The Wisdom of Crowds

 

ISBN: 9780385721707

READ: 2017-03-13

AUTHOR: James Surowiecki

 

 

An important book on the workings of crowd mechanisms. I found particularly informative the many psychological and sociological experiments on crowd behaviour, as well as the valuable analysis of what makes a crowd smarter, on the whole, rather than dumber. The main ingredient? Let individuals be fairly independent, decentralised thinkers, and aggregate their predictions. This trend is in line with much of mid-20th century philosophy and its transition from an aristocratic conception of knowledge to a democratic one – with philosopher Karl Popper’s The Open Society And Its Enemies as the manifesto of that shift. George Soros was inspired by Popper when he called his philanthropic trust Open Society Foundations; as a connecting thread, in one of the late chapters the author argues for the significance of short-sellers in a healthy market.
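To make the book’s central mechanism concrete, here is a minimal simulation sketch (my own illustration, not code from the book): many independent, unbiased, noisy guesses, once averaged, land far closer to the truth than a typical individual guess does.

import random

random.seed(42)
TRUE_VALUE = 1198   # pounds; roughly the ox weight in the Galton anecdote that opens the book
N_GUESSERS = 800

# Each guesser is noisy but unbiased, and independent of all the others.
guesses = [random.gauss(TRUE_VALUE, 150) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd error:           {abs(crowd_estimate - TRUE_VALUE):7.1f}")
print(f"mean individual error: {mean_individual_error:7.1f}")
# Independent errors largely cancel (the crowd's error shrinks roughly as
# 1/sqrt(N)); correlated errors - groupthink - would not, which is why
# independence and decentralisation are the key ingredients.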

Surowiecki deals well with market mechanisms; the leap to collective decision-making in politics is daunting and much more complicated, but it is waiting for us ahead. I think this book makes a valuable contribution in that direction by shedding some light on the positive outcomes of collective behaviour. Tellingly, the crowd that makes better decisions isn’t guided by groupthink: it is rather a collection of diverse, autonomously thinking individuals, who get to the best overall action or prediction by acting upon their distinctive informational edges. Indirect praise of the importance of critical thinking and of the authentic, responsible sharing of knowledge.

 

Open questions: short-term vs long-term thinking and the role of virtues

In the discussion of market bubbles, Surowiecki sheds light on the gap between short-term and long-term vision, what Nassim Taleb calls a lack of ‘skin in the game’. So, which one should we focus on? And why do long-term projects work better for everyone? There is an understandable sense in which acting short-term makes things easier – patience, sustained long-term focus and commitment are hard to put into practice, after all. But why should these qualities be valuable? Is it a kind of folk belief that great things need time to be achieved? Is it just a plain fact, and what makes us consider them valuable is their rarity? A physiological component here may be, among others, that of brain hardwiring, the famous 10,000 hours of practice. I think that the ability to engage in flow-state brain activity, which has been marvellously described by Mihály Csíkszentmihályi, is equally hard to attain because of its dynamism; everyone who has occasionally engaged in meditation or in any other focussed practice knows what I am talking about. The 10,000-hour rule seems to point at a plain psychological fact; the flow-like component points at something which is hard to achieve, and which for that reason greatly rewards attention and balanced effort with bliss and timelessness. From both sides, it seems we are hardwired to work better, achieve better results and feel better with long-term thinking. Is it a mere biological need? If so, what is its underlying driving force?

The Stanford marshmallow experiment only shows that those pupils who waited for the later reward (which would add to the first one) generally performed better on SAT tests and the like. The question is: why would delayed gratification (which reflects self-control, patience, the ability to commit oneself, and so on) be biologically more advantageous? And if it is, provided we know why, wouldn’t that also imply that virtues are important from a biological point of view?

 

I think of myself as a reflective person, so I usually wait until I know what I am talking about before speaking. We are all bound to have incomplete information, but checking things usually helps to get a broader picture, and we know that not everyone engages in some form of reflection; so being talkative doesn’t mean that someone has done their homework, and several experiments have shown that whoever talks at the beginning of a meeting tends to direct its course. The talkative speaker sets the bar of the discussion. Here the old fast-and-slow thinking dilemma presents itself: fast thinking isn’t necessarily the best, but it comes out faster and shapes the environment before the slow thinker comes around.

A possible way out is to recognise that slow thinking has its advantages. So when people take quite a long time to make up their minds, it may be valuable to put additional effort into listening to them, because they will often come up with something that hasn’t been discussed yet, and we are naturally driven to march along with the agenda set out right at the beginning. Awareness of the risk of confirmation bias and a willingness to hold each other accountable may keep us out of trouble.

 

Notes on prosocial behaviour, virtues and collaboration

What makes collaborative behaviour efficient is that people don’t have to repeatedly keep each other in check. Being virtuous even though nobody controls you is an ethically interesting topic. Spinoza would say that only that is true virtue, because it is not compromised by fear of judgement. I’d like to look at its behavioural component, though. If I explore my mental mechanisms, any behaviour would presumably be enhanced in quality once a decision has been made once and for all (that is, once we decide to be virtuous and not to cheat anybody), because that brings a kind of peace of mind from no longer having to deal with changing one’s mind. It is hard to keep promises, but it is almost impossible to stay sane while breaking one’s mental patterns over and over. From a purely biological perspective, insofar as altruistic genes play a valuable role in societal mechanisms, they will always find a way to reproduce themselves. From an individual standpoint, firmly ethically established people may have some reproductive advantages, by signalling to partners the valuable traits of trustworthiness and the ability to stand up for one’s own values, which suggests that once the bond within a couple is set up, they would be willing to stand up for their partner as well.

“Knowledge is not real knowledge unless it has been verified and discussed among other people. You don’t gain valuable knowledge by locking yourself up in a room.” – That has been historically true for science, but it is increasingly important in the humanities too. If science has made progress a lot faster than other disciplines, that may also be because of its clear goals. Clear goals supply scientists with a set of broadly shared assumptions, and that in turn makes them work together more effectively. As the humanities set out to be more collaborative, with projects funded for teams spanning a diverse professional spectrum, it seems likely that they will need tools to achieve a more precise view of their overall aims, making ethical discourse all the more valuable. Ethical issues (especially if seen as a kind of coordination problem) become all the more relevant as people increasingly have to work together.

 

The Skeptical Tradition

The Skeptical Tradition - M. Burnyeat, University of California Press

 

ISBN: 978-0520037472

READ: 2016-12-24

EDITOR: Myles Burnyeat

 

Revisiting how ancient skepticism has shaped the history of western thought has been a valuable philosophical exercise. Once the skeptical problem has been recognised in its full epistemologically destabilising power, I believe, it is the philosopher’s duty to find an adequate response to the threat it poses to the very possibility of knowledge. Indeed, as Kant remarked, our present argumentative weapons might not have been developed without the influence of skeptical thought.[1]

I will dare to present in the following brief chapters the most peculiar and striking features of the struggle for and against the skeptical challenge, as it has been carried out in the history of ancient and modern thought.

 

The Motivation of Greek Skepticism – an essay by David Sedley

Ancient skepticism stood for the impossibility of saying anything about the nature of external objects: people could only say how those objects appeared to them through sensory perception. Although the ancients never fully questioned the existence of external objects, Sextus Empiricus’s skeptical armament leaves that possibility wide open. Besides realism, another characteristic feature of ancient times was to consider philosophy a therapeutic means; it is no surprise, then, that even skepticism was set forth as a compelling recipe for attaining happiness.

Although the Skeptic chose (literally) to be called an ‘Inquirer’, so as to underline his open-mindedness in opposition to the disposition of the dogmatist, he exposes his commitment to suspension of belief by asserting that “to every argument an equal argument is opposed”. This is to say that although he has not yet found conclusive reasons to prefer one argument over another, he would not change his mind even in the future: he will always find new proofs to balance dogmatic arguments, bringing him back to epoche.[2] Furthermore, even if epoche could somehow be justified as an end, since no convincing argument has yet been produced to move one to assent, it is much less conclusive that one should uphold ataraxia as the philosophical aim, as Sextus does. In Sextus’s framework, epoche is the means to attain ataraxia. The story goes that ataraxia emerged as if ‘by chance’ in skeptics’ minds after embracing epoche, but that certainly does not dispel the ghost of belief bias as support for an unquestioned ethical commitment.

 

 

The Stoicism of The New Academy – an essay by Pierre Couissin

Arcesilaus, one of the heads of the Platonic Academy, shows through a reductio ad absurdum that the Stoic sage has to end up withholding assent: since having an opinion means assenting to something, to have no opinion means that he must not assent to anything. Thus Arcesilaus’ skepticism would be a consequence of Stoic doctrine, not a doctrine of his own. “Since everything is inapprehensible, the Sage can only give assent to the inapprehensible, so he will withhold assent.”[3]

While Stoic, and therefore Academic, epoche is voluntary, skeptics live in a constant condition of doubt: they neither assent nor deny. To the Epicureans assent bears no meaning, since all perceptions are true. Arcesilaus worked from Stoic material to develop counterarguments that induce epoche necessarily; he retained a pragmatic take on skepticism by affirming that withholding assent shouldn’t prevent one from acting, since one acts upon what appears to him. Academic philosophy is therefore presented by Couissin as a heterodox Stoicism, for it would not have existed without Stoic logic; Academic criticism in fact helped refine Stoic arguments.

 

The Ten Tropes of Aenesidemus  – an essay by Gisela Striker

The Ten Tropes rest on two different strategies: undecidability – which leaves open the possibility of eventually finding out what is true – and relativity – which aims at showing that there cannot be anything we can infer to be true about the nature of anything. Since the latter is more of a negative dogmatism, the kind of skepticism usually attributed to the New Academy, Pyrrhonian skepticism relies on the first to bring on its epoche, and the ataraxia that follows.

The inconsistency Striker points out is that the Tropes, as Sextus presents them, are made to underline the argument from relativity. She stresses that this is not a matter of negligence: Sextus had to do so in order to impress upon the reader the persistence of undecidability, by pointing out how things appear different in relation to the observer, and even more so in their relational content.

 

Can the Skeptic Live his Skepticism? – an essay by Myles Burnyeat

The Humean challenge to Pyrrhonian skepticism: a skeptic cannot live up to his ‘theoretical’ standards because

  1. what the Pyrrhonist invalidates through reasoning is nothing less than reason and belief;
  2. a man cannot live without believing something; the argument is therefore that it will be impossible to live as a skeptic.

What Sextus answers:

  1. that he can easily give up believing in something and simply live by what appearances say about the world;
  2. that giving up reason can be done as a result of argumentation, i.e., of reason.

In Skepticism, conflicting arguments are equipollent: they ‘carry the same weight’. To dismiss the skeptical challenge, it should be enough to employ a polished probabilistic theory of assessment and gather a little more data to move the scale’s needle one way or the other.
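In Bayesian terms – my gloss, not Burnyeat’s – equipollence amounts to the claim that no piece of evidence ever favours either side. The assessment rule would be

\[
\underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{likelihood ratio}}
\times \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}} ,
\]

so equipollence requires a likelihood ratio of exactly 1 for every piece of evidence E; a single observation with a ratio different from 1 moves the needle one way or the other.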

 

Burnyeat reminds us that in ancient times the Skeptic dealt exclusively with the true essence of external objects, since all ancient philosophical traditions held to realism; true or false propositions could be made about the nature of things which are non-evident, whereas appearances did not have any truth value, because they imposed themselves upon the subject as evident.[4] Thus Burnyeat remarks that

“All belief is unreasonable precisely because, as we are now seeing, all belief concerns real existence as opposed to appearance.” p.122

Sextus’s claim is that once our inquiry has been carried out and we face the undecidability of conflicting beliefs, we find ourselves compelled to give up belief; the Skeptic has no problem with that. He even claims that epoche brings him to experience ataraxia, and he can hence live by following appearances while withholding assent regarding the true nature of non-evident things.

 

How, then, is one simply to follow appearances in guiding his living? Here is the skeptical recipe for conducting a life without belief:

  1. the Skeptic acts under guidance of nature;
  2. he is constrained by what the body demands (physical needs) and is thus not absolutely impassive; rather, suspension of belief will greatly alleviate his sufferings by removing the additional misery laid on top of occasional physical distress;
  3. he will follow customs and traditions, while suspending judgement on their truth or falseness;
  4. he practices an art of any kind, so that he will be busy with something.[5]

It is important to notice that when Sextus talks about “appearances”, he refers to “both objects of sense and objects of thought.”[6]

Thus in Sextus there is no opposition or choice between appearances and realities, but rather “questions about how something appears and questions about how it really and truly is”,[7] and answering the latter is shown to be impossible due to the undecidability of conflicting appearances.

“It turns out, then, that the life without belief is not the mental blank one might at first imagine it to be. It is not even limited as to the subject matter over which the sceptic’s thoughts may range. Its secret is rather an attitude of mind manifest in his thoughts. He notes the impression things make on him and the contrary impressions they make on other people, and his own impressions seem to him no stronger, no more plausible, than anyone else’s.

[…] Thus the withdrawal from truth and real existence becomes, in a certain sense, a detachment from oneself.” p.129

 

As I have already remarked, the Skeptic is passive in his assent to impressions, and he is equally forced to suspend belief. How then does ataraxia follow? Hellenistic moral psychology held that emotions depend on belief; thus, removing beliefs would relieve one from feeling good or bad about something, i.e. from emotions, though it would not eliminate the physical constraints of hunger, thirst, and so on. “The life without belief is not an achievement of the will but a paralysis of reason by itself”;[8] it is as though Sextus wants to bring us to a pre-conceptual, animal heaven, where the development of meaning is suppressed in the name of unmovable bliss. One may of course wonder how joyfulness can be achieved without building a sense of how one’s life should be led.

 

In any case, can we really say that Sextus completely eschews belief? We may interpret his starkest sentences (“all things appear relative”) as a belief, as a chronicle of a held belief, or as a chronicle of a belief that he tends to support.

One possible defence is that by suspending judgement, the Skeptic simply does not take on the dogmatic type of belief, the one that asserts the impossibility of knowing anything about the non-evident. The point is, it is quite hard to distinguish between a non-dogmatic and a dogmatic belief, since dogma can be understood in a broader sense too, that of accepting perceptual experience as it is, of which Sextus makes large use; moreover, it is impossible to disengage from belief without breaking its link with the truth.[9] Since belief has an unbreakable connection with truth, and since Sextus claimed that we are forced to suspend belief about the truth of any proposition, we cannot say that he held any belief.

 

There is another point to be made about the supposedly non-epistemic quality of Sextus’s appearance-statements: to hold that p (suspension of belief) because of a certain argument (the undecidability between conflicting appearances) “is hardly to be distinguished from coming to believe that p is true with that argument as one’s reason.”[10]

“If the sceptic works through reasoned argument to the point where the reasons on either side balance and reason stultifies itself, if his arguments are (in the now famous phrase) a ladder to be thrown over when you have climbed up, then we must insist that they make their impact through the normal operations of our reason.” p.139

Again, the main problem should be that of trying to account for ataraxia as the indisputable result of the skeptical enterprise. If the Skeptic claims, as he in fact does, to remain open to further inquiry, that should not be understood as the possibility of discovering that some arguments may in fact be stronger than others, for that outlook would secretly imply a search for answers, and with it the kind of anxiety that the Skeptic deliberately wants to get rid of.

“Ataraxia is hardly to be attained if he is not in some sense satisfied – so far – that no answers are forthcoming, that contrary claims are indeed equal. And my question is: How can Sextus then deny that this is something he believes?

I do not think he can. Both the causes (reasoned arguments) of the state which Sextus calls appearance and its effects (tranquillity and the cessation of emotional disturbance) are such as to justify us in calling it a state of belief.” p.140

In other words, the Skeptic could not really achieve ataraxia unless, to some extent, she put herself in a position where she tends to think that opposite claims carry the same weight, time and again; we should then count this first-level disposition as ‘belief’. That is to say, reasons do shape the Skeptic’s thinking process, even though she reports that reason has no epistemic value in determining what is true. A life without belief would thus not be possible.

 

Ancient Skepticism and Causation – an essay by Jonathan Barnes

The problem with Sextus’s attack on causation is that the language he uses to carry out his arguments inevitably bears some causal force. He is a champion of Life, in opposition to Philosophy and Belief; and Life is the realm of Common Sense.

“Skepticism is directed against Belief or dogma; dogma is defined as ‘assent to some item from among the nonevident objects of inquiry in the sciences.’ … It emerges that the ‘nonevident objects’ in question are things ‘which do not have a nature capable of falling under immediate observation.’” p.157

Thus Skeptics retain the common sense knowledge of causation, that which can be known by evidence, and discard first-principle-like notions of causality.

 

A problem with skeptical arguments in general, if they are not well crafted, is that they leverage an argument from disagreement to induce skepticism about certain issues. Barnes remarks, however, that it is not the mere fact of disagreement that should cast doubt, but rather the reasons supporting that disagreement:

“Admittedly, if there is reasoned disagreement, then that may cast doubt upon the explanation; but in that case it is not the disagreement itself, but rather the reasons for it, which cast the doubt. … It is not your disbelief, but the reasons for your disbelief, which may properly lead me to doubt.” p.166

 

A further criticism of causation, which will be taken up and improved upon by Gassendi (see the following chapter), is Aenesidemus’s second mode, by which every theory one could come up with is underdetermined by sense-data.[11][12]

 

The other argument employed by Skeptics against causation relies upon the Modes of Agrippa, by which one should not be able to infer the cause of anything without falling into an infinite regress of causes. Here is how Barnes answers the skeptical challenge, charging the skeptic with an ignoratio elenchi:

“If the proponent of efficient causation is to ground his thesis, then he must indeed produce a ‘cause’ or reason for believing that there are causes; but he is under no obligation to cite an efficient cause.” p.179

Barnes nevertheless grants that the skeptical argument has some power, for it forces the dogmatist to narrow and clarify his theory of causation:

“If we are to justify a belief in causes, we cannot do so by any direct form of argumentation: either causes – or, if I may use the expression, ‘becauses’ – are a fundamental presupposition of thought; or they are immediate data of experience – things to be perceived, not to be argued for; or their existence must be shown by way of some ‘transcendental’ argument.” pp.179-180

 

Another issue with causation is the problem of time: the skeptical argument tries to show that (1) causes precede their effects in time, and (2) causes do not precede their effects in time; therefore, causes do not exist. We can easily picture an example for each: (1) we say that when a ball breaks the window, the cause precedes its effect; (2) we should not be able to call the ball a ’cause’ of the window being broken before that fact actually happens; cause and effect are mutually dependent, and therefore exist at the same time. In other words, because A exists as a cause of B exactly when B is produced, it is not possible for A to exist as a cause of B prior to B being produced as an effect of A.
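Put schematically – my own rendering of the dilemma, not Barnes’s notation – the argument runs:

\[
\begin{array}{ll}
(1) & \mathrm{Causes}(c,e) \rightarrow t(c) < t(e) \quad \text{(the throw precedes the breaking)} \\
(2) & \mathrm{Causes}(c,e) \rightarrow t(c) = t(e) \quad \text{(nothing is a cause until its effect exists)} \\
\therefore & \text{no pair } (c,e) \text{ satisfies } \mathrm{Causes}(c,e)
\end{array}
\]

The two premisses jointly rule out every candidate pair, hence the conclusion that causes do not exist.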

How should one tackle this problem? Barnes suggests that the misleading point in the argument is to treat causing as a datable event, instead of simply analysing two events – the thrown ball and the broken window – one of which stands to the other in a causal relation.

“In an outmoded jargon, causal relations are not real but rational. The fundamental error in Sextus’s main argument against causation is that of treating causing as a datable event, an occurrence in the world. It is a piquant thought that we can refute a skeptical argument against causation by insisting that causation itself is unreal.” p.186

By ‘causation is unreal’, again, the author means that causation cannot be identified as an event that could be somehow dated between cause-event A and effect-event B.

 

Augustine against the Skeptics – an essay by Christopher Kirwan

The original, pre-Cartesian thought that Augustine raises to meet the skeptical challenge is the “Si fallor, sum”:

“If Augustine believes something erroneously, he exists

If Augustine exists, he does not believe erroneously that he exists.” (p.221)

These two propositions jointly meet Zeno’s condition for knowledge, by which something, in order to be known as true, must bear a sign that cannot come from anything but the true proposition itself, so that the proposition must have the following features: to be believed by someone, without its possibly being believed erroneously.

“The former condition [that the proposition is believed by someone] is fulfilled by anyone who recognises [any proposition] as (a) among his own beliefs, and sees the force of Augustine’s proof that (b) he cannot believe [it] erroneously; and the latter condition is fulfilled by anyone who sees the force of the simple little proof just given that no falsehood can possess features (a) and (b) jointly. If anything can be manifest, these facts can be.” p.221

 

The Rediscovery of Ancient Skepticism – an essay by C.B. Schmitt

Some historical details about the revival of Skepticism in the Renaissance: the Latin word scepticus itself began to be used at that time, after Diogenes Laertius’s work had been translated.

Another interesting fact about the influence of skeptical thought is that Skepticism survived in Byzantium, shaping in part Eastern Christianity’s theology. It was then brought into Europe through Italy during the 15th century; subsequently, in the 16th century, as the Italian Renaissance began to fade, it migrated to Northern Europe.

As with Platonic ideas, which have been exploited for different purposes during the history of thought, Skepticism in the modern era has been used both for and against religion.

 

Gassendi and Skepticism – an essay by Ralph Walker

The Modern Era brings an even more radical turn for skeptical arguments: the problem of the external world.[13] In fact, Sextus’s Skepticism goes beyond questioning the true nature of things: it threatens the truthfulness of all objective claims; the only things he recognises as certain are mental representations (phantasiai).

 

Gassendi rephrases the problem of rationalism as follows: it relies upon the rational human faculty through arguments that already assume its trustworthiness. Most empiricists acknowledge the circularity and refuse to ground their statements on reason alone, claiming that one must attain knowledge only through sense-perception; given the very issue of underdetermination of theories by data, which we previously encountered in Barnes’s essay, it is hard to see how empiricists could ground any of their theories on sense-data alone either.

Following the same argumentative line, Gassendi anticipated Mill and Quine in the refutation of a priori knowledge. Quine, however, discredits Skepticism altogether by appealing to that very refutation:[14]

“The very impossibility of satisfying the skeptic’s demand shows (or so it may be held) that the demand itself was out of place.” (p.332)

The only justification someone could appeal to in order to meet the quest for knowledge is the commonsensical one: that which is held by many or accepted by those whom we regard as authorities… in short, just what Aristotle claimed science should consist of.

As a naturalised epistemologist, though, Quine must be very careful in rejecting a priori knowledge and normative justifications: at that point he would break his self-imposed descriptive constraints and himself embrace a normative prescription.

A naturalised epistemologist, therefore,

“to the contention that knowledge is possible a priori can only reply by showing – or trying to show – that he can account for all that we believe we know without having need of that hypothesis. He cannot show that a priori knowledge is not possible; he can only argue that we do not, in fact, possess any.” p.332

 

Descartes’s Use of Skepticism – an essay by Bernard Williams

It is impossible to speak of Skepticism without including Descartes in the overview. Because the role of skeptical reasoning in Descartes’s philosophical project has been covered so widely in the literature, I want to highlight just a couple of points about Descartes’s pragmatic quest for science, by virtue of which he dismisses Skepticism as a mere waste of time.[15]

 

After Descartes embarks on his hyperbolical skepticism, he is forced to find a solution, and comes up with the cogito and the God justification. Gassendi and Arnauld, among others, immediately pointed out many limitations. We can see how badly Descartes wants to overcome the skeptical impasse by remembering that, after recognising how perceptions can be systematically deceptive, and after establishing the cogito and the divine warrant, however precarious it may seem, he rushes to set up the rational foundations of science so that sensory perceptions can be properly systematised. His quest for objective knowledge is undeniably a quest for certainty.

That surely is a platitude; but it may be useful to remember Descartes’s pragmatic side to appreciate why he rejected any skeptical challenge so fiercely.

“The skeptic has no reasonable claim, in terms of practical reason, to make us spend time going round and round his problems, rather than making genuine progress with the problems of science.” (p.350)

 

Locke and Pyrrhonism: The Doctrine of Primary and Secondary Qualities – an essay by Martha Brandt Bolton

One of the favourite skeptical strategies is to point out the incompatibility of sense-perceptions: if S1 states that the honey is sweet, while S2 finds that it is not, then we don’t really know what the nature of honey is. Locke answered by separating primary qualities – those which constitute the essence of an object: solidity, extension, motion, number and figure – from secondary qualities. Since secondary qualities do not speak of the nature of the object, their occasional incompatibility cannot show the object to be unknowable; with respect to primary qualities, Locke takes perceptions of them to be either true or false – the perception of an object’s shape, for example, carried out under the same conditions, could hardly mistake roundness for cubicness.

 

The Tendency of Hume’s Skepticism – an essay by Robert J. Fogelin

“Hume’s skepticism and naturalism meet, for the state of moderate skepticism is viewed as a result of two causal factors: radical Pyrrhonian doubt on one side being moderated by our natural (animal) propensities to believe on the other.” p.399

 

Hume’s strategy is somewhat dreadful: his mastery of skeptical arguments initially brings him to reduce knowledge to mere probability; then, almost as if to show how destructive the skeptical progression becomes as it gains momentum, he reduces first-order probabilities to second-order probabilities. That is: probabilities of true perception can be doubted, so the only available solution would be that of formulating probabilities of true judgement about perceptions; and because those second-order probabilities can be doubted further, through an infinite regress, Hume ultimately gets from knowledge ‘to nothing’. Not even the so-called peritrope – which is often used to show that the skeptic undermines his own position – would work: skeptical reasonings survive, sometimes strengthened, sometimes weakened, ‘according to the successive dispositions of the mind’ (Hume). We may grant that

“Skeptical arguments are self-refuting, but this only puts us on a treadmill, since setting aside our skepticism and returning to the canons of reason inevitably puts us on the road to yet another skeptical impasse. For Hume, skepticism is completely immune to rational refutation. Indeed, it is the fated end of all reasoning pursued without restraint.” p.402
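As an illustrative reconstruction of this regress – mine, not Hume’s own formulation – model each reflective check of a judgement as leaving it with reliability at most $r < 1$; after $n$ iterations of self-assessment the surviving credence is at most

\[
C_n \le C_0 \, r^{\,n} \longrightarrow 0 \quad (n \to \infty),
\]

which is the descent from knowledge, through ever higher-order probabilities, ‘to nothing’.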

 

Hume then appeals to the idea of naturalism as the ‘best explanation’ to account for the intuitive force of beliefs, in spite of the strength of the skeptical position – which, we should be reminded, holds that epoche is a suspension not just of judgement but of belief itself.

“Hume’s central idea seems to be this: If belief were fixed by processes of reasoning, then the skeptical argument […] would drive all those who have considered it to a state of total suspension of belief. Indeed, in our closet, such skeptical reflections can come very close to inducing this extreme state. Yet when we return to the affairs of daily life, our ordinary beliefs come rushing back upon us and our previous state will now strike us […] as amusing. But the restoration of belief is not a matter of reasoning and therefore cannot be explained on any of the traditional theories of belief formation where it is assumed that the mind comes to its beliefs by a process of reasoning.” p.403-4

We can appreciate how Hume posits that the traditional theory of belief is wrong insofar as it is exclusively rational. It is not an attack on rationality, either; Hume simply wants to remark that, in order to describe how it is possible that someone may come to believe anything after being brought to epoche by the skeptic, one needs to argue that some basic beliefs may have no rational ground at all:

“Hume’s theoretical skepticism concerns arguments. In its various manifestations it shows the groundlessness of given beliefs. It is not aimed at nor does it have any tendency to diminish the force of those beliefs that spring up in us naturally.” p.407

Hume is a skeptic who finds himself unable to stop having beliefs. He deems it his natural instinct that makes it so; it is not a state he actively reaches through some course of reasoning: he simply finds himself in that position.

“In sum, Hume’s skepticism and his naturalism meet in a causal theory of skepticism itself.” p.410

 

Kant and Skepticism – an essay by Barry Stroud

This last chapter is somewhat different: I liked Stroud’s lines so much for their clarity that I could hardly have written anything better than a bad paraphrase. Here we will focus on Kant’s response to skepticism, a fierce reply to the ‘scandal to human philosophy and to human reason in general’ that ‘the existence of things outside us … must be accepted merely on faith’.[16]

 

The heart of the matter is that as long as one retains the epistemic priority of perceptions over external reality – that is, holds that internal states are more easily knowable than their corresponding outer objects – skepticism is the only possible conclusion. Everything therefore rests on the argument that experience, in order to be possible, must necessarily be an immediate consciousness of an external state of affairs.

“If it is true that ‘inner experience in general’ is possible only if ‘outer experience in general’ is possible, and ‘outer experience’ is the immediate, direct perception of external things, then in order to know of the existence of things around us it is not required that we determine in each case or in general that there is an external reality corresponding to our perceptions.” p.418-9

“The realism that [Kant] wishes to show is the only correct view would deny the inferential and therefore problematic character of our knowledge of things around us in space. That is precisely why it is the only correct view; it is the only view that is incompatible with skeptical idealism, and hence the only view that can explain how our knowledge of the world is possible.” p.419

Idealism though has been shifted to the higher level of explanation, that of transcendental knowledge:

“The objects we perceive around us in space are dependent on our sensibility and our understanding. It is only because that is true that we can perceive those objects directly and therefore can be noninferentially certain of their reality. So some form of idealism is required, after all, in order to explain how our knowledge of the world is possible.” p.420

The epistemological twist that allows Kant to (presumably) annihilate the skeptic is what has been famously called the ‘Copernican Revolution’:

“Space and things in space are to be seen as ‘empirically’ real but ‘transcendentally’ ideal. Although idealism and realism are incompatible, they do not conflict if one is understood ‘transcendentally’ and the other ’empirically’. That is precisely Kant’s solution to the problem of how human knowledge is possible.” p.420

 

Objects do really exist out of us – in an empirical way, they are real; they further need a priori knowledge in order to be knowable by us:

“Realism is in part the view that objects exist in space independently of human or other perceiving beings, so it is quite obvious that at least that part of realism, understood ‘empirically’, is true.” p.421

“A transcendental investigation examines the necessary conditions of knowledge in general; it is the search for an understanding of how any knowledge at all is possible. And for Kant that amounts to an investigation of what we must know a priori if any knowledge of objects is to be possible.

[…] We could discover a priori, independently of experience, what the general conditions of knowledge are, only if those conditions were in some sense ’supplied by us’ or had their ‘source’ somehow ‘in us’, the knowing subjects, and not simply in an independently existing world.” p.422

How does all this link to the Cartesian problem? The Kantian solution forbids extending particular doubts into a global skepticism:

“Kant’s refutation of idealism is meant to prove that if Descartes’s negative conclusion were true it would violate one of the conditions that make any experience at all possible.

[…] If we have any experience at all, we must be capable of direct experience of ‘outer’ things that exist quite independently of us in space, so our access to, and hence our knowledge about, things in space must be direct and unproblematic in a way that is invulnerable to philosophical attack of the sort Descartes tries to mount.” p.429

 

Transcendental idealism is quite peculiar in its own form: it allows ordinary language to talk about the objective existence of external objects, while providing an explanation of that very knowledge which aims at making it invulnerable to the skeptic’s spell.

“Our direct and unproblematic access to objects around us in space is possible, according to Kant, only because the things we are directly aware of in experience are appearances and are dependent on us. That idealist thesis in turn implies that we can have knowledge only about those things that are dependent on us. But when we say or believe in everyday life that we see a pencil or a piece of paper and thereby know that it is there, and we also believe that pencils and pieces of paper are things that are not dependent on us, we are not saying or believing anything that contradicts those idealist theses.” p.430

“In our ordinary empirical judgements about reality we do not commit ourselves one way or the other on the question whether reality in general matches up with or corresponds to the way it is perceived to be; so in claiming knowledge or certainty about the world we do not commit ourselves to the falsity of philosophical skepticism. Therefore we do not have to show on each occasion how we know that philosophical skepticism is false in order for our ordinary assertions of knowledge and certainty to be true and fully legitimate.” p.431

“The way in which the ordinary judgements are in general legitimised secures the result that, in making them, we are saying nothing about the way things are, transcendentally speaking.” p.432

 

The very last part of the essay sketches one of the main 20th-century antiskeptical arguments, the positivist verifiability principle of meaningfulness. The verifiability principle states that in order to gain knowledge about the world, one must be able to tell whether a given claim is true or false; that is, it must be verifiable by empirical inquiry. Every proposition about the world that is verifiable would therefore be deemed meaningful, while any proposition that lacks that property – precisely what fits the skeptical doubt about the external world – would simply be meaningless, and not worthy of any examination.

Stroud argues that such a move is Kantian in spirit, although its authors (Carnap & Co.) wanted to remove any appeal to transcendental talk. It remains unclear whether the antiskeptical structure of the argument would survive were the transcendental proof removed.[17]

References

1. Barry Stroud, “Kant and Skepticism”, in The Skeptical Tradition (University of California Press, 1983)
2. David Sedley, “The Motivation of Greek Skepticism”, in The Skeptical Tradition (University of California Press, 1983), p.21
3. Pierre Couissin, “The Stoicism of The New Academy”, in The Skeptical Tradition (University of California Press, 1983), p.35
4. Psychology has been decisive in highlighting the unreliability of appearance reports, i.e. the stimulus-error.
5. Myles Burnyeat, “Can the Skeptic Live His Skepticism?”, in The Skeptical Tradition (University of California Press, 1983), p.126
6. Ibid., p.127
7. Ibid., p.129
8. Ibid., p.133
9. Ibid., p.137
10. Ibid., p.138
11. An account of the underdetermination of theories by data in contemporary skeptical arguments can be found in my short review (forthcoming) of Stroud’s The Significance of Philosophical Scepticism, 1984 (New York, Oxford)
12. Pages 173-175 describe, for those particularly interested in how the Stoics refined their theory of causation under the pressure of skeptical arguments, the process by which the theory of agency causation develops into a theory of event causation: it is not an agent that is the cause of an effect, but rather one specific event that is the cause of another.
13. With regard to the problem of the external world, see my short review of Stroud’s The Significance of Philosophical Scepticism (forthcoming)
14. An extensive chapter on Quine and the skeptical problem of the external world can be found in Stroud, 1984
15. For a broader sketch of Descartes’s work, see the Descartes review below
16. Immanuel Kant, Critique of Pure Reason
17. In Stroud, 1984, the author claims that in fact it does not.

Descartes


 

ISBN: 9780195075908

READ: 2016-12-04

AUTHOR: Georges Dicker

 

 

Meditation I : The Method of Doubt

As an introduction, we should note that Descartes embarks on the challenge of doubting everything with a purpose: to discover whether there may be some certainty to build upon, with the same method one would use to inspect a basket of apples and separate the rotten ones from the unspoiled.

Can we say that when one ventures to find some principles – especially when there is no empirical data to test them against, as is the case for Descartes in his Meditation I – the purpose attached to that research is what defines the outcome? By doubting everything, Descartes emerges in the end with a strong, true principle; skeptics don’t.

By the time Descartes undertook his philosophical quest, the scientific revolution of the 17th century had already wiped out the teleological view of nature, and the universe came to be conceived as a great machine. The true novelty with respect to the Middle Ages is that there is no place for purpose in the universe anymore. (p.6)

 

After analysing the first meditation, Dicker proceeds to investigate whether the three main skeptical arguments – the Deceptiveness of the Senses, the Dreamer Argument and the Deceiver Argument – are self-defeating, an issue skeptics can run into.

The first is dismissed as non-self-defeating, for Descartes argues that the senses should not be trusted completely, not that they shouldn’t be trusted at all. He can therefore affirm that because the senses are sometimes deceptive, they should not be entirely trusted.

The second argument is in fact self-defeating: how could Descartes tell whether his dream perceptions and waking perceptions have the same vividness without actually knowing the difference between dreams and wakefulness? Dicker concedes, however, that for the sake of Descartes’s argument one does not need to be certain of the difference between the two states to suspend judgement about the issue: having a belief about it is sufficient.

As for the third argument, Dicker points out that while the conclusion is that the senses provide no certainty, the premiss is an analytical (= a priori) claim; therefore, the Deceiver Argument implies no contradiction.

An analytical statement is one that can be known to be true just by thinking – given, of course, that words first need to be learned through experience. Once words are learned, however, analytical statements are true by definition, i.e. they do not need any empirical (a posteriori) evidence to be confirmed. The underlying conceptual frame of Descartes’s Deceiver Argument is, according to the author, the causal conception of perception (CCP):

For any person S and material object M, S perceives M at time t only if M is a cause of S’s perceptual experience at t. (p.30)

 

How does one show that this statement is analytical and true?

Dicker’s demonstration:

  1. Take a contradictory statement: “I see a pen, but it is not the case that a pen is one of the causes of my perceptual experience.”
  2. Negate the contradiction, turning it into an analytical statement: “If I see a pen, then a pen is a cause of my present visual experience.” (Statement 1 has the form p and not-q; its negation, not(p and not-q), is equivalent to “if p, then q”.)
  3. “I see a pen only if a pen is a cause of my present visual experience” (statement 2’s form “if p, then q” translated into “p only if q”).

Note that analyticity is hereditary, from 1 to 3 (see pp.32-33).
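The logical step can be checked directly; in standard propositional notation (my rendering, not Dicker’s layout), negating the contradictory form is just De Morgan’s law plus the definition of the conditional:

\[
\neg(p \land \neg q) \;\equiv\; \neg p \lor \neg\neg q \;\equiv\; \neg p \lor q \;\equiv\; (p \to q) \;\equiv\; (p \text{ only if } q)
\]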

 

 

Meditation II : The Cogito and the Self

Descartes can conclude that he exists because, even though he doubted that he existed, the very fact of doubting proves that he exists as a ‘thinking thing’. He tests the cogito against the Deceiver Argument and finds that if a demon were deceiving him, he would still need to exist in order to be deceived.

Note that the Latin cogito, ergo sum is meant to express a sense of continuity, namely that as long as I am engaged in the process of thinking, I exist:

I am thinking, therefore I exist

Descartes employs here a positive doctrine: that all reflexive judgements (or subjective meta-thoughts: beliefs about one’s own thoughts) should be true.

 

One may ask why Descartes couldn’t carry out his skepticism through the assumption that he might be insane. It is legitimate, however, to dismiss that option, for a person who considers himself insane cannot pretend to discover any truth by means of philosophical reasoning; it is therefore possible to reject all instances of confused perception that might have led Descartes to think he held two opposing beliefs about what he perceived (Dicker’s example: “a person who thought he believed he saw a horse while really disbelieving this”, p.48 – what contemporary philosophers would classify as a type of “Moore’s paradox”), for they would make the case of an insane subject.

 

A basic problem with the cogito: “What entitles Descartes to use the first-person pronoun “I” in the premiss of his proof?” (p.48), i.e. that “I” am thinking?

  • in this form, the proof of existence is question-begging;
  • following Russell’s interpretation of “I” as a mere grammatical convenience, we would derive the proof “There is a thought, therefore I exist”, which is invalid;
  • Jaakko Hintikka (https://www.researchgate.net/publication/273109869_Cogito_Ergo_Sum_Inference_or_Performance) proposed to view the cogito as a performance: the very fact of questioning one’s existence makes one’s existence evident beyond any argument. This interpretation has been criticised as too narrow (the proof of existence would rely on the specific thought of doubting one’s existence, whereas Descartes wanted the whole thinking experience to be such a proof), and it would in any case need a supporting argument to justify why that specific procedure guarantees the existence proof.
Dicker therefore turns to explain the main assumption underlying the cogito: the substance theory. Substance theory, in opposition to the bundle theory, asserts that a thing is a collection of properties plus an underlying ‘reference’ substance. The substance theory is supported against the bundle theory by the Argument from Change – that “A human mind is a substance, since even if all its determinate properties change, it is still the same mind.” (p.53)

Once this basic assumption has been recognised, and once we acknowledge that Descartes assumes that a substance cannot exist without properties, and vice versa, the Cartesian thought would naturally be considered a property of an underlying substance, namely the ego: “He knows what a thought is, he knows that it is an attribute and not a substance. Again, by the light of nature, he knows that every such attribute must belong to a substance. So he concludes to the existence of the substance of which the thought he perceives is an attribute. This he calls ego; or, if you like, he concludes that the “I” in “I am thinking” does refer to a substance and is not just a grammatical convenience.” (Anthony Kenny, quoted in Dicker at p.56)

 

Critical points with the Substance Theory:

Substance is ‘unperceivable in principle’, and therefore many empiricists reject it; but that is not enough, for one needs to reject the underlying Argument from Change, not such an easy task. Dicker mentions contemporary attempts, inspired by Locke’s work, to defeat the Argument by appealing to the concept of “spatio-temporal continuity”.

 

Critical points with the “substance-property principle”

It is not obvious that properties must depend on a substance in order to exist. The issue here points to the controversial problem of universals. The three main competing theories in that field are Platonic Realism, Moderate Realism (born with Aristotle; the assumption Descartes takes up) and Nominalism (supported by the English empiricists). Needless to say, each of them faces some difficulties.

One more assumption that Descartes did not acknowledge: that thoughts must be properties, rather than substances – something he takes for granted and does not proceed to demonstrate.

What Descartes cannot do, in any case, is assign the existing thing just derived from the aforementioned premisses to the pronoun “I”: we may grant that the thinking thing exists, but there are no grounds on which we could say that this substance is someone in particular – not myself any more than you, or the Platonic world-soul. To recap: “I exist” does not follow from “I am thinking”.

Here I find it helpful to quote the following point at full length:

“it is impossible to prove one’s own existence. If this is correct, then Descartes’s error was not that he held that “I exist” is certain – on that point he was surely right – but that he held that “I exist” can be proved from “I am thinking”.” (p.63)

 

Dicker makes explicit Descartes’s parallel between the concept of a material thing – the example of the wax, which keeps the same concept in the face of its changing properties – and the thinking substance – again, the substrate of changing cogitationes, thoughts. This lays the foundations of Cartesian Dualism: a thinking, unextended substance (res cogitans) and an extended, unthinking one (res extensa).

By the end of Meditation II, Descartes has got to the point where he can be sure only that he is a thinking thing, but he does not exclude the possibility of being a body, something he will try to settle in Meditation VI; Cartesian Dualism has therefore not been established yet – we only have a hint of it.

 

 

Meditation III : The Existence of God and the Criterion of Truth

Derivation of the criterion of truth from the cogito:

“If something could be clearly and distinctly perceived yet false, then this would shed doubt on the cogito itself. But the cogito is absolutely indubitable. Therefore, what is clearly and distinctly perceived cannot be false; so it must be true.” (p.84)

We must further remember that the cogito is a complex structure, composed of “I am thinking”, “‘I am thinking’ entails ‘I exist’” and “I exist”. “I am thinking”, as we have seen, is certain; the second is certain because the entailment is obvious; “I exist” is certain because it follows from two certain propositions. The criterion of truth will be used by Descartes to establish the existence of God, mind-body dualism and the existence of a material world.

Descartes can be absolutely certain of the clearness and distinctness of his perception while he is focused on that perception, but he still doubts whether the very fact of perceiving something clearly and distinctly can be taken as a criterion of truth. Thus he addresses the issue of God: he must show that God exists, and that he does not deceive.

 

Introducing the Cosmological Argument for the Existence of God

For Descartes, God is the ultimate, necessary cause of the effects we experience; here effects cannot be physical, for the existence of a physical world has not been postulated yet: Descartes thus grounds his argument upon the fact that his idea of God must have its cause in God himself.

For Descartes, an idea is something that must exist; it is like a picture of something – it has a thing as its object – but it is neither true nor false, since the idea simply exists. Truth and falsity attach only to judgements or inferences about ideas. In considering ideas as pictures, Descartes differentiates their status between ‘more objective’ – ideas that represent a substance – and ‘less objective’ – those which picture a property; more specifically, the degree of reality of different ideas matches the degree of reality their objects would have if they existed. At the same time, ideas can be said to share the same ontological footing when considered as ‘modes of thinking’ – states of mind that come and go.

We now proceed to lay down what Dicker calls “the core argument”, the proof of God’s existence. We should first add another tool needed to fully understand the coming argument: Descartes employs the principle by which something cannot proceed from a cause that is less real than its effect. He will make the point clearer in the Principles of Philosophy (part 1, no.17) with the example of a complex machine, which cannot be more complex than the man who designed it. (The idea that the Universe must have a cause, taking the existence of complex objects as a proof of the existence of God, would later be challenged by Darwinians; the most recent and popular example is The Blind Watchmaker by the biologist R. Dawkins.) The scholastic rule that “everything must have a cause” is justified by Descartes with the Latin ex nihilo, nihil fit.

Moreover, note that modes of thought have a lower degree of reality than a finite substance, and the latter has a lower degree of reality than an infinite substance. All things thus considered, the core argument can be presented as follows:

  1. “The cause of an idea must have as much reality as the idea represents its object as having.
  2. Only a perfect God has as much reality as my idea of God represents him as having.
  3. The cause of my idea of God is a perfect God (from propositions 1 and 2)
  4. A perfect God really exists.” (p.99)
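The skeleton of the argument can be put schematically – my notation, not Dicker’s – writing $R(x)$ for the degree of formal reality of $x$ and $R_o(i)$ for the objective reality that idea $i$ represents its object as having:

\[
\begin{array}{lll}
\text{(P1)} & R(\mathrm{cause}(i)) \ge R_o(i) & \text{a cause precontains the reality of its effect} \\
\text{(P2)} & R_o(i_{\mathrm{God}}) \text{ is infinite} & \text{the idea represents an infinite, perfect substance} \\
\text{(C1)} & R(\mathrm{cause}(i_{\mathrm{God}})) \text{ is infinite} & \text{from P1 and P2} \\
\text{(C2)} & \text{only a perfect God is infinitely real; hence God exists.}
\end{array}
\]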
An interesting point here is that, by assuming the existence of a necessary formal cause of the idea of God, and by recognising that Descartes himself could not be that cause – for he has neither a superior nor an equal degree of reality compared with what his idea of God represents – Descartes postulates that there must be something else outside him, thus ending his solipsism.

Let’s not forget that the core argument has been brought in to support the criterion of truth. Once the existence of God is established, then, since God is by definition a perfect being and deceiving would be an imperfection, God cannot be a deceiver. Therefore, every clear and distinct perception, as it is caused by God, must be true.

 

Critical points: the ‘precontainment argument’

As Hume clearly pointed out, even upon scrupulous inquiry it is impossible to recognise an effect within its cause, as the two must be separate; the Humean theory of causality says that cause and effect are events whose distinctness could hardly be accounted for if the effect were somehow contained in the cause. Dicker, however, proposes not to take the Cartesian version of causality so strictly, but to consider it more commonsensically: that “the cause must precontain the reality of the effect” (p.112). Such a doctrine is quite problematic anyway, as any evolutionary biologist would tell you. A last proposed interpretation was given by Mackie – that Cartesian causality can be considered a sort of conservation principle; the point here is that scientific conservation principles stem from inference, and could therefore not be used by Descartes in a context where external reality hasn’t yet been proved.

 

Critical points: degrees of reality

Hobbes was the first to ask for a grounding of the doctrine of degrees of reality. Descartes continued to consider it ‘self-evident’, and the only thing we can be certain about is that God, in Descartes’s mind, has a higher degree of reality because he can exist without finite substances, while the converse cannot be true. The same strategy, though, does not work on the substance-property scale: although properties cannot be “free-floating”, it is also true that there cannot be a substance without properties; to infer that properties have a lower degree of reality than substances from the fact that properties change while substances don’t is an altogether different criterion from the dependence-independence rule previously employed, and the doctrine is therefore not very coherent.

 

Critical points: the causation theory

Criticism of the Ex nihilo, nihil fit maxim also has a Humean taste: the proposition bears a double meaning, namely that

  • Nothingness cannot be a cause
  • Something cannot exist without a cause

From the reading on which “Nothingness cannot be a cause”, we cannot logically deduce, as Descartes does, that “Everything must have a cause”; at the same time, if from the “Something cannot exist without a cause” reading it does follow that everything must have a cause, that reading is not supported by the first premiss, i.e. that “a cause must precontain the reality of its effect”.

To recap:
  • “A cause must precontain the reality of its effect” -> (valid) “Nothingness cannot be a cause” -> (invalid) “Everything must have a cause”
  • “A cause must precontain the reality of its effect” -> (invalid) “Something cannot exist without a cause” -> (valid) “Everything must have a cause”

What Dicker offers here in Descartes’s defence is a quote from E. M. Curley, an explicit challenge to Hume: “Admittedly I can conceive of something springing into existence ex nihilo. But I cannot believe that this ever happens.” (p.118) Thus, from a logical standpoint the argument is far from solid; it nevertheless stands upon a strong sense of what ought to be believed.

 

Critical points: the Cartesian Circle

Arnauld raised the following criticism in the Objections to the Meditations: Descartes proves the existence of a non-deceiving God through the reliability of clear and distinct perception, and at the same time holds that clear and distinct perceptions are reliable because they are given by God.

A possible answer depends upon whether we can show that either the reliability of clear and distinct perception or the existence of God can be established before the other.

In analysing what Dicker calls “the vindication-not-needed strategy” – the claim that God is not necessary to ensure the truth of clear and distinct perception – there emerges the well-known argument by which Descartes would invoke God’s existence only to warrant that one’s memory of clear and distinct perceptions is not deceptive. The author presents a set of philosophical and philological arguments to claim that the memory defence is not Descartes’s own; the interpretation which has been put forward (by the scholar James Van Cleve[3]) is that Descartes would use the divine guarantee to get from the memory of a clear and distinct perception to the claim that it must be true – not to confirm the reliability of one’s memory, but to support that the remembered perception is in fact true. Thus, Descartes would not escape the charge of circularity.

 

Trying to escape the Cartesian Circle

The most relevant justification for the independence of clear and distinct perception lies in the so-called “General rule defence”, by which some scholars argue that God is required to warrant that clear and distinct perceptions are true in general, but that there are particular perceptions, as Descartes repeatedly points out, that cannot possibly be false – statements such as “I am thinking, therefore I exist” or “2+3=5” are ‘assent-compelling’.

Dicker’s thesis is that the general rule defence is self-defeating: if the general principle of clear and distinct perception cannot be certain before the proof of God’s existence, then that very doubt must contain the proposition that even while having a clear and distinct perception one may be wrong. “Doubting the general principle must consist in thinking ‘Even when I was having a clear and distinct perception, which admittedly I could not doubt at the time I was having it, I may nevertheless have been mistaken: the proposition that I was then clearly and distinctly perceiving may actually have been false.’” (p.131)

How, then, could Descartes’s position be saved? If the Cartesian doubt stems from reasons that brought him to think that there may exist an omnipotent God, who can do anything and therefore may even deceive him altogether, reason itself can, upon further inquiry, provide a basis to nullify that doubt. The initial doubt would therefore be merely prima facie, and would be defeated by building, through reasoning, the theological argument: namely, that an omnipotent God, even though he can be conceived as deceitful, must reasonably be non-deceiving because of his perfection.

This point tackles the problem of radical skepticism so deeply that I want to quote Dicker at length on the issue: “Once we grant the legitimacy of the use of reason required to infer the possibility of our going wrong about the simplest things from the possibility that there is an omnipotent God, there is no reason in principle to deny the legitimacy of the use of reason that leads to the conclusion that the omnipotent God who actually exists is a perfect being who, while still fully able to deceive his creatures, would not wish to do so. Indeed, consistency requires that if we allow the former use of reason to be legitimate, then we must also allow the latter to be legitimate.” (p.139-140) Thus clear and distinct perceptions “are used only because they constitute the most careful use of the intellect we are capable of” (p.140): the theological argument is completed successfully and validated in the absence of reasons for doubting, and it simultaneously validates reason itself.

 

The theological argument has become so significant that we have to examine whether Descartes can really prove that such an omnipotent and infinite God exists. His response to Gassendi’s criticism is that men can understand the nature of God without necessarily grasping it: it is sufficient to be able to ‘touch’ it rather than ‘embrace’ it in order to know that he is omnipotent, benevolent and infinite. Bernard Williams[4] has put forward a powerful charge: being able to conceive that such a God exists, without being able to explain how he comes to be infinite and so on, does not guarantee that he is the equally powerful formal cause of Descartes’s idea of God.

 

 

Meditation V : The Ontological Argument for the Existence of God

There is still another proof of the existence of God in Descartes’s toolkit: the Ontological Argument. It combines the criterion of truth with the connection between supreme perfection and existence to postulate that a supremely perfect being, i.e. God, must exist. The heart of the argument lies in the assertion that existence is a perfection: a supremely perfect being, who cannot lack any perfection, must therefore exist.

The most powerful objection to the Ontological Argument was stated by Kant: if existence is not a property, as Descartes takes it to be, then the claim that existence is a perfection must be rejected. Kant’s argument is that existence cannot be considered a ‘descriptive property’, but rather a concept that is applied to something. For example, if we say that “zebras are striped”, ‘striped’ assigns a particular property to its subject. By asserting that “zebras exist”, instead, we are simply stating that the term “zebras” applies to something.

However, Kant’s objection is considered not to be conclusive. Some have argued that existence can be regarded as a property of things we know exist, just as non-existence could be thought of as a property of dragons, gremlins and unicorns. Dicker reinforces Kant’s argument with deflationism, a popular solution to the problem of negative existentials, i.e. statements like “Carnivorous cows do not exist”. Deflationism holds that negative existentials are not about their subjects, but rather assert that the concept of a carnivorous cow is not exemplified; such statements are deceptive in their grammatical appearance. By applying deflationism isomorphically to affirmative existentials, one would be strongly committed to the Kantian objection that existence is not a descriptive property, for “a carnivorous cow” would then be non-existent simply because its concept is nowhere exemplified. Nevertheless, negative existentials such as “Dragons do not exist” are much more difficult to deal with; hence the deflationist support for Kant’s objection is not conclusive either.

 

A further complication: Material Mode of Speech and Formal Mode of Speech

The core of another well-known objection, raised by the priest Caterus in the First Set of Objections to the Meditations, is that one cannot transpose conceptual into material existence. The issue was clarified further in the last century, when philosophers began to adopt two ways of talking about words: the Material Mode of Speech (MMS) and the Formal Mode of Speech (FMS), which refer, respectively, to nonconceptual realities and to definitions or concepts.

By applying such a distinction to Descartes’s argument

  1. A supremely perfect being has all perfections
  2. Existence is a perfection
  3. Therefore, a supremely perfect being exists

and premise 1 being clearly a definition, i.e. a formal mode of speech, the argument is invalid: from two formal modes of speech Descartes derives a material mode, that is, the existence of something.

 

Final remarks

All things considered from the analysis of both Meditations III and V, it emerges that Descartes has failed to carry out his theological argument. The criterion of truth should thus not be considered as guaranteed by the existence of God, but as standing on its own. Everything that goes beyond the criterion of truth, such as dualism, should therefore be considered baseless, for it depended on the reliability of the theological argument.

 

 

Meditation VI : Dualism and the Material World

Upon the criterion of truth, Descartes builds the argument that, since being able to think clearly of two entities X and Y as distinct makes it possible for them to exist separately, at least by God’s power, and since he can perceive his mind as independent from his body, mind and body could exist separately. It follows that immortality is proved not as necessary, but as merely possible.

Yet how is it, as Arnauld puts it, that while I can conceive of a triangle without thinking about its Pythagorean property, I cannot possibly deceive myself when clearly perceiving myself as a thinking thing without extension? Descartes’s answer is that, just as it is possible to conceive of a triangle without considering its Pythagorean property, but impossible to think of a triangle while holding that it does not have such a property, so too, if I can conceive of myself as a thinking thing, it necessarily follows that I am an unextended substance, as was shown in Meditation II.

 

The basis for the existence of the material world is drawn directly from the thesis that there is a perfect God who cannot possibly deceive us; the whole argument is therefore unsuccessful, as we have previously seen from Meditations III and V. Dicker argues, therefore, that Descartes has not secured any certain knowledge beyond the cogito, leaving us with the radical skepticism of Meditation I.

Anyway, we shall now briefly outline his general argument for the existence of external bodies. First of all, since Descartes has sensory experiences, by the same principle followed in Meditation III they must have a cause with at least as much formal reality as the objective reality of those very perceptions. And since Descartes asserts that he cannot possibly be the cause of those perceptions, as they arise independently of his will, their causes must lie outside of him. How does Descartes then prove that those perceptions are not caused by God or by some other nonphysical entities – in other words, by something other than physical bodies?

“The cause of my sensory experiences cannot be God or any created substance other than bodies; for God has given me no way to spot that this is so but, instead, a very powerful inclination to believe that the experiences come from bodies (material objects). So God would be a deceiver if the experiences were produced in any other way. But since God is a supremely perfect being, he cannot be a deceiver.

Therefore, bodies exist.” (p.202)

Descartes subsequently establishes the existence of an extended substance (space-matter, the well-known res extensa), of which bodies are properties, accidents.

 

The mind-body problem: Interactionism and proximate causation

Descartes repeatedly states that he can know that his mind has a special relationship with his body; specifically, he points to the pineal gland as the physical place within the brain where the interaction between the unextended substance (mind) and the extended one (body) would occur. The major problem with dualism has always been that of explaining how an unextended and an extended substance could possibly interact: there cannot be any physical, mechanical or thermal event that could pass a bodily sensation to the mind, and the reverse is just as impossible.

Dicker here presents a development of the interactionist theory by Ducasse, who introduces the notion of proximate causation to make dualism more plausible. Proximate causation designates any cause-effect relationship in which no intermediary steps are identifiable to explain how the causation works: if, for example, a specific mental event causes a particular electrical level in some synapse, there really is nothing more to explain about precisely how that happens – it is a brute fact. Now, brute facts can be thought of as ‘regularities’ – in a Humean way, we observe that in nature certain events regularly follow certain others, and such occurrences cannot be further warranted other than by appealing to even more general regularities. All things considered, mind-body interactions such as the synapse example described above can be described as regularities – as brute facts which have no less legitimacy than those of monistic accounts in the philosophy of mind.

 

The mind-body problem: Cartesian Dualism’s reliability

Dualism has undergone a substantial rebuttal since the advance of materialism; the author here courageously proceeds to examine whether its debacle is definitive or not: maybe dualism still has something to tell us. The most recent dualist version he puts forward is Cornman’s; according to him, mental events can be thought of as causing neural impulses together with brain events, with the concession that mental events themselves depend on brain stimuli. Mental events would thus be irreducible to physical occurrences. Dicker argues that Ockham’s Razor is not relevant here, for it should be applied to entities whose only role is explanatory and for which there is no independent evidence. Mental events, on this account, are not essential to explain how neural impulses occur (for those could be traced directly to the ultimate cause of brain events) and are supported by independent evidence: that of introspection, and that of the logical argument that, since we can conceive of the mind existing independently of the body, the mind can exist without the body, and so body and mind must differ from one another.[5]

In defence of Descartes’s account of a weaker dualism, I’d quote Dicker as a closing remark:

Nothing in Descartes’s case for dualism rules out such dependence of res cogitans on res extensa. At best Descartes’s arguments give a certain epistemological priority to res cogitans – show that its existence can be known before that of res extensa. But this does not mean that res cogitans has any metaphysical priority – that it can actually exist independently of res extensa. (p.231)

 

References

1. https://www.researchgate.net/publication/273109869_Cogito_Ergo_Sum_Inference_or_Performance
2. The idea that the Universe must have a cause, by taking the existence of complex objects as a proof of the existence of God, will be challenged by Darwinians; the most recent and popular example has been given in The Blind Watchmaker, by biologist R. Dawkins.
3. http://fitelson.org/epistemology/vancleve.pdf
4. http://philpapers.org/rec/WILDTP-6
5. A thorough analysis of the mind-body problem with respect to the issue of consciousness can be found here: Explaining Consciousness. With respect to some methodological issues within philosophy of science, a review of Sober’s recent work Ockham’s Razors will soon be released.

The Shape of Ancient Thought


 

ISBN: 978-1581152036 

READ: 2016-11-06

AUTHOR: Thomas McEvilley

 

I approached McEvilley’s tome in a full post-Modernist ride. Around four years ago, I began to look into Indian and Buddhist philosophy to find solace and inspiration – something I could not find in the western philosophies I had been taught at school. That was my gateway into proper philosophical inquiry, however hesitant and stumbling.

The recent turn has been to pursue a degree in philosophy, which of course is exclusively western. So I was interested in looking at possible connections between the two traditions. Specifically, the single piece of information which prompted me to buy the book was the hypothesis – not a new one – that Greek skepticism was influenced by early Buddhism. Several books, more modest in size, have recently appeared to deal with that particular issue; however, I wanted a broader picture, so I opted for McEvilley’s work.

I’m glad I did. Concerning the diffusion hypothesis in support of Buddhist influence over Pyrrhonism, McEvilley is clear in pointing out that the main elements of the Pyrrhonist doctrine should be traced back to the Greek Skeptic tradition of the Democritean lineage rather than to the Buddhist one; the ‘suspension of judgement’ precept, for example, serves the psychological purpose of attaining tranquillity in Skepticism, not the religious aim, found in Buddhism, of escaping transmigration.

For McEvilley, the hypothesis is even turned upside down: a detailed inquiry into the development of dialectic in the Indian tradition suggests that it underwent an abrupt change, observable in the works of Nagarjuna, an extremely important author of the Mahayana Buddhist tradition, generally held responsible for the ‘second turning of the Wheel of Dharma’. McEvilley points out that Mahayana developed in the Gandharan area, which had had a strong Greek presence since the days of Alexander’s expedition; he parallels elements of Stoic and Epicurean logic with Madhyamika’s, but while the Greek traditions went through a steady development of dialectic, it is hard to trace any of Nagarjuna’s logical argumentation back to the ancient Indian tradition.[1]

Of the two important moments of diffusion across Greek and Indian philosophies sketched out in the book, what I have just briefly described points to a diffusion of methods. The second concerns a diffusion of contents, and it dates back to the pre-Socratic period. I knew very little about eastern influences on Greek thought, even though the Greeks themselves acknowledged that much knowledge in astronomy and mathematics, for example, derived from Mesopotamia. So much for external influence – which, coming from the Near East, is not preeminently subversive, though it goes unnoticed by the lay student. And although evidence is beginning to be gathered, Indian influence over Greek thought is even more difficult to accept, mainly for ethnic reasons.

And here comes the post-Modernist wave: destabilising for its relativisation of values, and precious for its eagerness to strip historical analysis of ethnic prejudice. I am all for any inquiry that aims at recognising an ethnically broad set of contributions to the history of mankind, at whatever may reduce the unnecessary and unjustified gap of Western supremacy vs. Eastern subordination. At the same time, such research shouldn’t be done unrigorously, under the pressure of post-colonial rehabilitation, for example. Yes, cultural and intellectual change occurred at a much faster pace in Greece than in India, probably because of Greece’s weaker connections with religion. Still, the traditional division into “Greek = rational”, “Indian = mystical” is quite outdated, as a detailed comparison of the two traditions sketches out a balanced interpenetration of mystical pursuit and argumentative wit on both sides.

Let’s finally come down to the diffusion of contents: McEvilley argues that through the medium of Persia – where Greeks and Indians lived together at the cosmopolitan court of the Persian emperors – Indian thought shaped Greek philosophy[2] on the side of monism, substance monism, atomism, elemental transformation and probably reincarnation: “Upanisads seem to precede Parmenides in monism, and to have directly influenced Heraclitus’s view of the process of nature; Jain atomism and Carvaka materialism would seem to precede Democritus.”[3]

Probably the most striking parallel is of an ethical order, and it concerns the precept of imperturbability (the object of the book’s last chapter): Platonism and Neoplatonism, the Vedanta and most schools of Mahayana Buddhism devise a transcendentalist approach, connecting with a higher Being by means of perceptual disengagement; Theravada Buddhism, Epicureanism, Skepticism and Stoicism ground imperturbability on the understanding of natural laws. Imperturbability (ataraxia) is that mental condition by which a man is indifferent and equanimous toward the events of daily life; it is a virtue-ethics theme, which claims that the same act performed under different mental conditions has different ethical value. Only Aristotle advocates a full engagement in feelings and passions. It was very interesting to note such similarities, and I could not help but think about how ethical directives from such different backgrounds could be compared, given that nowadays the precept of imperturbability is carried on by the revival of Stoicism, Buddhism and Zen. Owen Flanagan outlined in The Bodhisattva’s Brain how a hypothetical assessment of ethical effectiveness could proceed only after defining which type of ‘happiness’ each tradition seeks; afterwards, one can gather the ethical guidelines and see whether they match their goals. Given, though, that two different types of ‘happiness’ are being compared, not much can be said about which one to prefer. The surprising closeness of the definitions of ‘imperturbability’ among such different traditions made me wonder whether an operational definition could be given in order to perform neurophysiological studies; the question of grounding morality in science is far from resolved and highly controversial, but in this case, with a somewhat similar ethical goal, it would be possible to test different guidelines empirically against each other. The question then may be whether imperturbability is even a desirable goal. On the one hand, it is undeniable that humans search for mental stability, to different degrees; at the same time, many would argue that a life without emotions is not a life anymore.

 

McEvilley’s chapters can be read as brief stories, so historical and anthropological is his outlook, and the book need not be read in its entirety, for the main concepts are amply repeated; the length of the volume keeps its promise of delving into plenty of historical, philosophical, philological and artistic detail. To use an overly abused label: a ‘highly recommended’ read.

 


 

Further notes and interesting remarks:

 

  • The famous tuning experiments of Pythagoras were not groundbreaking: basic harmonic ratios were already well known and studied in Mesopotamia. Mesopotamian influences are much broader than one may think (the calendar, the sexagesimal system, the precessional cycle).
  • You can turn the title into a question: “What shape is ancient thought? Round.”[4]
  • First notions of ethics are found in the Vedic emphasis on rituals: “Good is equated with the correct performance of the rite, bad with the incorrect performance.”[5]
  • How many times do we think of our civilisation as the most cosmopolitan and liberal of all times? Not to say that we have not made progress, but let’s not forget how long it took us to get here, and that others have attempted it in the past as well: “The Achaemenid state was the first world empire in history to proclaim a completely tolerant and benevolent treatment of the cultural traditions of dozens of peoples and tribes.”[6]
  • The Bull of Wall Street? A symbol of growth which traces directly back to the Sumerians.[7]
  • On the history of the concept of infinity: it arose with conscious effort in Greece with Anaxagoras, well before Indian thought came to grasp the concept.[8]
  • On the reported beginnings of phenomenalism in the western tradition: the Democritean attempt to explain and save the relationship between human experience and the physical reality of atoms and void: “Sweet exists [only] by convention, bitter [only] by convention, color [only] by convention: in reality there is only atoms and void.” (Fr. 9) The Parmenidean emphasis on the illusory character of sensory experience is thus safeguarded.[9]
  • On matters of atomism, McEvilley goes further, arguing that the basics of Democritus’s philosophy could be found even earlier in India, in the teachings of the Ajivika Pakudha Kaccayana.[10] The author points out that, during the described influence of Indian thought on the Greeks through the medium of the Persian Empire, atomism may well have been among the subjects of philosophical contamination.[11]
  • To challenge the common belief that Indian philosophy may be treated unitarily, as an almost coherently metaphysical body of thought: the Carvakas held to a materialistic naturalism which made them think of “conceptions of religious ethics, such as karma, samsara, and moksha [liberation] … as deliberate deceptions by the priestly caste which profited economically from them.”[12]
  • Parallels between Buddhist and Epicurean utilitarian hedonism; in Buddhism, though, the evaluation involves not only a personal account of happiness, but also the sufferings of the people involved in the course of action under evaluation.[13] Examples of moral progress, if we take Singer’s definition in The Expanding Circle. No wonder Buddhism has gained momentum in America over the last 50 years; as the author points out, “logical positivism, pragmatism, linguistic criticism, empiricism, and utilitarianism seem to have been characteristic of early Buddhism.”[14]
  • A substantial difference between contemporary ‘skeptical movements’ and ancient skepticism: “Sextus’s statement that the Skeptic “keeps on inquiring” does not mean that he is actively engaged in a search for the truth, but that he has not settled on a position.”[15]
  • On the origins of the Buddhist Mahayana tradition (that which is led by the Dalai Lama, to be clear), of which, as touched upon before, Nagarjuna is the ‘founder’: it “seems to have been originated in the Greco-Buddhist communities of India, through a conflation of the Greek Democritean-sophistic-skeptical tradition with the rudimentary and unformalised empirical and skeptical elements already present in Early Buddhism.”[16]
  • Hints of probability as a useful epistemological tool in the Epicurean logician Philodemus.[17]

 

 

References

1. “With Pyrrhon of Elis’s trip to India the pre-Alexandrian period ends. In that period Indian and Greek thinkers had developed similar dialectical attitudes. But only in Greece, it seems, had these attitudes equipped themselves with formal methods in this period. After Alexander’s colonisation of northwest India a five-hundred-long period of Greek and Indian cultural intermixing took place. Toward the end of this period, the array of Greek dialectical forms turns up in India, mature, complete, and without evidence of developmental stages, in the school of Buddhist thought called Madhyamika.” McEvilley, T. (2002) The Shape of Ancient Thought (Allworth Press, New York) p.416
2. The dissemination process can be expanded as follows: “A teaching that came into Greece and India in the Bronze Age or earlier seems to have been reinforced in Greece in the sixth century by a wave of Indian input, which probably originated in the same Bronze Age source but had undergone significant reinterpretation as its context changed. The Mesopotamian doctrine, in other words, if such there was, had travelled both into Greece and India by 1000 B.C.; subsequently it remained inconspicuous and, probably, unchanging in Greece, but in India it underwent further development, associating it with convergent doctrines from the non-Vedic community, and so on; this developed form of the doctrine in turn was disseminated from India into Greece in the sixth century B.C.” McEvilley, T. (2002) p.287
3. McEvilley, T. (2002) p.653
4. McEvilley, T. (2002) p.91
5. McEvilley, T. (2002) p.114
6. McEvilley, T. (2002) p.125
7. McEvilley, T. (2002) p.265
8. McEvilley, T. (2002) p.313
9. McEvilley, T. (2002) p.316
10. McEvilley, T. (2002) p.318
11. McEvilley, T. (2002) p.321
12. McEvilley, T. (2002) p.328
13. McEvilley, T. (2002) p.338
14. McEvilley, T. (2002) p.340
15. and that “The most important effect of the Skeptic’s epochè is to preserve him from philosophical discussion.” McEvilley, T. (2002) p.479
16. McEvilley, T. (2002) p.503
17. McEvilley, T. (2002) p.512

Micromotives and Macrobehavior


 

ISBN: 978-0393329469

READ: 2016-09-22

AUTHOR: Thomas C. Schelling, winner of the Nobel Memorial Prize in Economic Sciences in 2005

 


 

This book taught me to think a little more analytically about what happens in standard situations where individual, free choices lead to aggregate behaviours that nobody wants.
It is a classical problem of economics; Schelling warns the reader that every depicted function is nothing more than what it is – a model, an ideal scenario stripped of subtle, but no less important, features. Schelling’s functions aim at “illustrating the kind of analysis that is needed, some of the phenomena to be anticipated, and some of the questions worth asking”; typically, they involve two-person scenarios such as the notorious prisoner’s dilemma, or they focus on broader dynamics where the object of study is equally ‘simple’ – behavioural oppositions between blacks and whites, rich and poor, young and old, etc.

Given that no single underlying principle can be extracted from such a broad array of phenomena, Schelling lays out an interesting overview of macro-features: one is that phenomena occur in pairs, as the previous dichotomies do; another is that populations are guided by a principle of conservation (for example: no matter how hard you try, you can never get rid of the worst 10% of employees, for the statistical feature of being ‘the worst’ is independent of the members of the system – the only solution would be to close your office), or move through a semi-closed system; Schelling then talks about complementary population sets such as the two sexes or two ‘races’ (Schelling wrote these papers in the Seventies, when racial issues were prominent in the US), notes that “the independent variable in a system of behaviour often proves to be the sum of the dependent variables in a system”, and that “people react to a totality of which they are part”.

Famous “critical mass” phenomena are just part of the family. One of the funny things about them is that “even if one of the outcomes is unanimously chosen, we cannot infer that it is preferred from the fact that it is universally chosen.” To appreciate such a statement intuitively, graphs are an extremely useful tool, even if a little unwelcome to those who don’t like to see reality through straight lines.
S-shaped curves do help a lot, though. In “Thermostats, Lemons, and Other Families of Models” Schelling analyses how different scenarios may play out depending on which metrics we choose, and act upon, to define phenomena. “If absolute numbers are what matter … the activity is likely to be self-sustaining in a large group but not in a small one.” Think of a typical dynamic where people would go to a bar only if at least n other people are hanging out there.
“If it is proportions that matter … there is the possibility of dividing or separating populations.” Speaking of language accents or fashion, for example, separating a population would have the effect of reshaping proportions, making some behaviours more or less easily adopted.
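As a toy rendering of the absolute-numbers case, here is a minimal threshold model in the spirit of Schelling’s diagrams; the population sizes and thresholds below are invented for the example, not taken from the book:

```python
def attendance_dynamics(thresholds, attendance, rounds=50):
    """Iterate the crowd: each person shows up next round only if the
    current attendance meets their personal threshold."""
    for _ in range(rounds):
        attendance = sum(1 for t in thresholds if t <= attendance)
    return attendance

# A hypothetical population of 100: 10 diehards who always go, 30 who
# need to see 15 people there, 40 who need 40, and 20 who need 90.
thresholds = [0] * 10 + [15] * 30 + [40] * 40 + [90] * 20

print(attendance_dynamics(thresholds, attendance=10))  # settles at 10
print(attendance_dynamics(thresholds, attendance=20))  # tips up to 80
```

The same population settles at two very different equilibria depending only on where it starts – the tipping behaviour that the S-shaped diagrams make visible at a glance.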

An ethical and very timely note comes from the discussion of the commons. Social contracts are sometimes blamed for not really being able to solve our problems. Schelling shows that such a view is superficial, and that in the end having rules, good or bad, is better than having none.
In a classic problem of electrical overload, the half of the population which undergoes voluntary restriction may well be angry at the free riders. Nevertheless, even though the free riders are better off than the other half, “the cooperative half may be better off for having found a way to make themselves cut back in unison.” Schelling here perhaps suggests that ‘being better off’ may not consist merely in exerting one’s individual choice unrestrainedly; there is a value, though less individual and thus less tangible and measurable, in the ability to act ‘in unison’. And that ability, I’d further suggest, may well come in handy in the future.

Speaking of segregation and its dynamics, Schelling strongly opposes the view that such an aggregate phenomenon has any social efficiency. Just as romance and marriage influence the aggregate genetic heritage we pass on, so depression and inflation “do not reflect any universal desire for lower incomes or higher prices … The hearts and minds and motives and habits of millions of people who participate in a segregated society may or may not bear close correspondence with the massive results that collectively they can generate.” Now that’s a blow to social-change efforts![1]

As pointed out earlier, it is exceedingly hard to tell what lies behind individual decisions just by analysing aggregate phenomena. Certain configurations, though, may be mechanically produced by playing around with sets of preferences, as Schelling does in the classical simulation of neighbourhood segregation with pennies and dimes. From an initial, almost balanced disposition of dimes and pennies, assigning fairly human preferences such as ‘stay only if more than 3/4 of your neighbours are of your own type, otherwise leave’ turns the board into an equilibrium which is evidently clustered. Playing around with the coins will show that a distinction between integrative and separative ‘behaviours’ is almost impossible to draw.
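A minimal sketch of that board, under assumptions of my own: a 20×20 wrapped grid, 20% empty cells, and a contentment threshold of one half rather than the stronger 3/4 preference mentioned above (even the milder demand is enough to produce clusters):

```python
import random

SIZE, EMPTY_SHARE, WANT_SAME = 20, 0.2, 0.5  # illustrative parameters

def neighbours(grid, r, c):
    """The up-to-eight occupied cells around (r, c), wrapping at the edges."""
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    """An agent is unhappy if under WANT_SAME of its neighbours match it."""
    me, around = grid[r][c], neighbours(grid, r, c)
    if me is None or not around:
        return False
    return sum(1 for x in around if x == me) / len(around) < WANT_SAME

def step(grid):
    """Move every unhappy agent to a random empty cell; report how many."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    for r, c in movers:
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None
        empties.append((r, c))
    return len(movers)

random.seed(1)
grid = [[None if random.random() < EMPTY_SHARE else random.choice("DP")
         for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(100):           # a fixed number of rounds is plenty to
    if step(grid) == 0:        # watch clusters of D and P form
        break
for row in grid:
    print("".join(x or "." for x in row))  # dimes, pennies, empty cells
```

Run it a few times with different seeds and thresholds: even mildly self-regarding preferences reliably produce a visibly clustered board, which is exactly what makes integrative and separative motives so hard to tell apart from the outcome.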

Finally, a quite intriguing poke at randomness: imagine a binary division scenario with complementarity, one of the features laid down at the beginning – sex: suppose people could choose the sex of their child in advance. Most would likely prefer a 50-50 distribution, but individual choice may be driven by:
1) wanting a boy or a girl, while badly wanting the 50-50 population ratio;
2) wanting a child of the scarcer sex for some advantages;
3) wanting a child of the preponderant sex for equally conceivable advantages.
In such a scenario, not everybody will turn out to be happy. “The binary illustration is a vivid reminder that a good organizational remedy for severely nonoptimal individual choices is simply not to have the choice – to be victims (beneficiaries) of randomization – and thus to need no organization.”
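As a toy illustration – my own, not Schelling’s – consider motive 2 from the list above: if every yearly cohort of parents picks the scarcer sex, the aggregate ratio self-corrects, but in a strange way.

```python
# Each cohort picks the scarcer sex (motive 2 above); cohort size is invented.
boys = girls = 0
for year in range(30):
    cohort = 1000
    if boys <= girls:
        boys += cohort   # boys scarcer or tied: everyone picks a boy
    else:
        girls += cohort  # girls scarcer: everyone picks a girl
print(boys, girls)       # 15000 15000: the aggregate ratio self-corrects

# Yet every single cohort is single-sex: individually sensible choices
# yield a macro-pattern nobody asked for, which is why randomization can
# beat deliberate choice here.
```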

References

1. Other economists have taken this issue a little further, showing that the individual choice of giving up one egg would decrease total production by 0.91 eggs. More details can be found in Doing Good Better by William MacAskill and Expected Utility, Contributory Causation, and Vegetarianism by Gaverick Matheny.

Explaining Consciousness


 

ISBN: 978-0262692212

READ: 2016-09-13

EDITOR: Jonathan Shear

 

Could a 1997 collection of papers still say something meaningful about the present state of the art in consciousness studies? Maybe not. Neuroscientific research has grown quite fast in the last decade. On a philosophical level, though, the issue seems to have changed rather little, and this book is very useful for getting a general picture of what is referred to as ‘the hard problem of consciousness’. Since many authors here question whether the physical sciences will ever be able to give a definitive answer to such a problem, probably none of the most recent disruptive scientific discoveries – the Higgs boson or gravitational waves – would change their minds anyway. The freshness of the present work, at least from a philosophical standpoint, became surprisingly clear to me after I watched Chalmers’ latest TED talk, given in 2014, where the same basic issues are laid out; in an even more recent episode of the Waking Up podcast, Chalmers makes the same arguments you’ll find here.

You will certainly notice how often I disagree (in the footnotes) with authors who propose alternative views to the traditional explanation of consciousness, which roots it in neurophysiological workings. I am particularly fond of the latest work of Sean Carroll, whose wit is especially precious in striking down philosophical zombies and downward causation. Nevertheless, as Chalmers’ TED talk and Sam Harris’ enthusiasm reminded me, these objections are far from defeated. My attempt with the present review is to present the authors’ arguments in the most neutral way, to be faithful to their explanations and leave any personal disagreement to the footnotes, so that you can judge for yourself how ungrounded I believe many of the presented arguments to be. I was a little disappointed to find that the collection presented by Shear is not well balanced. The majority of the papers fall into the category of those sympathising with Chalmers’ position; skeptics are in the minority, maybe because of Shear’s selection, maybe because skeptics themselves deemed such argumentation not worth engaging with in the first place. Another good reason to listen to the fringe is that science is becoming increasingly conservative because of the way it is structured. That is a good thing, except for the fact that discoveries become rarer and more expensive. Stretching these constraints a little is surely a good creative practice, and strengthens one’s critical abilities as well.

The present review mimics the structure of the book. Chalmers’ keynote article is followed by 26 different papers, all previously published in the Journal of Consciousness Studies. From Dennett to Price, in the section called “Deflationary Perspectives”, the general approach is to reduce the hard problem to something that the physical sciences would be able to fully figure out. From McGinn to Robinson, in “The Explanatory Gap” section, authors remark that the hard problem is really hard – some, that it is even insoluble. From Clarke to Bilodeau, in the “Physics” section, quantum-mechanical perspectives are used to shed light on the problem. From Crick and Koch to MacLennan, in the “Neuroscience and Cognitive Science” section, authors tackle hypotheses that link consciousness with the cognitive sciences. From Seager to Hut and Shepard, in the “Rethinking Nature” section, the possibility of defining consciousness as a general feature of the universe is explored. From Velmans to Shear, in the last section, “First-Person Perspectives”, it is argued that a new science of subjective phenomena is needed. The symposium is then summarised in Chalmers’ closing response paper.

Footnotes, although labeled ‘References’, are personal comments or useful additions for a deeper understanding.

Here you can download the ebook version of this review.

 


Facing Up to the Problem of Consciousness – by D. Chalmers

Chalmers starts off by making a distinction, within consciousness studies, between the easy problems and the hard problem.
Easy problems concern the functions and abilities of consciousness we haven’t yet figured out. The hard problem is the impossibility of reducing “what it feels like to be a human” to merely physical, functional terms. Chalmers refers to such feeling as experience; others call it phenomenal experience or qualia. He states that experience must be caused by something more than anything we would be able to detect at the neurophysiological level. He further argues that, since functionally identical organisms can be conceived either as having experience or as lacking it, we must look for a deeper explanation of experience beyond brain function.

Chalmers claims that an extra ingredient is needed. He suggests taking experience as a fundamental, irreducible ingredient of any theory of consciousness; he calls this naturalistic dualism since, in his opinion, there is no conflict with the established laws of physics: he simply adds “further bridging principles to explain how experience arises from physical processes” (p.20, original italics). Thus he names it a nonreductive theory of consciousness.

Chalmers constructs his theory upon the following three principles, the first and second of which are less fundamental (and less controversial) than the third:

  1. The principle of structural coherence, which states that processes of consciousness (characterised by Chalmers as the phenomenon of experience) and awareness (the ‘easy problems’ stuff) are structurally coherent.
  2. The principle of organisational invariance, which states that “what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components” (p.25).
  3. The double-aspect theory of information. The argument goes like this: since “the differences between phenomenal states have a structure that corresponds directly to the differences embedded in physical processes” (p.26), physical processing and conscious experience share some properties, and a double aspect of information itself can therefore be inferred.

 

Facing Backwards on the Problem of Consciousness – by D. Dennett

Dennett draws some parallels with the old issue of vitalism to point out that those functions Chalmers judged insufficient to explain the subjectivity of qualia are in fact responsible for our wondering about “how consciousness could possibly reside in a brain” (p.35); he further adds that without those functions, there would be nothing left to wonder about.

Dennett leaves Chalmers with the burden of finding independent grounds – as physicists have done to account for the introduction of fundamentals such as mass, charge and space-time – to support his claim that information should rise to the same ontological realm and be considered a fundamental property of the universe.

 

The Hornswoggle Problem – by P. Churchland

Churchland sets out to prove that Chalmers has carved up a problem space that wasn’t actually there. She asks rhetorically: “What exactly is the evidence that we could explain all the ‘easy’ phenomena and still not understand the neural mechanisms for consciousness?” (p.38) She locates the main proof in the zombie thought-experiment, the notion that it is conceptually possible for a perfectly functioning entity not to experience anything. She remarks that “saying something is possible does not thereby guarantee it is a possibility”. She further adds that the demarcation between the easy problems and the hard problem might be far less defined than Chalmers makes it out to be.

She brings the issue back to how little we actually know about how to figure out the supposedly easy problems, and calls out an argumentum ad ignorantiam, as Chalmers turns the incompleteness of our understanding into a metaphysical ground for his argument in favour of a new fundamental property of consciousness. “The mysteriousness of a problem is not a fact about the problem, it is not a metaphysical feature of the universe – it is an epistemological fact about us.” (p.42)

The final thrust at Chalmers’ essential distinction is made in the light of the history of science: more often than not, judgements about the tractability of problems are misplaced. Back in the Fifties, people believed it would be much easier to figure out the folding of proteins than the copying process. The story turned out differently. Churchland closes with the following: “When not much is known about a topic, don’t take terribly seriously someone else’s heartfelt conviction about what problems are scientifically tractable. Learn the science, do the science, and see what happens”. (p.43)

 

Function and Phenomenology: Closing the Explanatory Gap – by T. Clark

Clark’s argument is largely about how scientific theories should be developed, and how poorly Chalmers’ theory fits those requirements. In particular, if science is the practice of incorporating, in a Bayesian fashion, newly explained phenomena into an existing theoretical framework, it has to be done with minimal changes to the original framework. Chalmers’ theory poses de facto a dualistic reality, which has been ruled out by physical inquiry.
Another fundamental aspect of scientific procedure is that, by the rules of falsifiability, the burden of proof lies on those who try to add something to a theory. Furthermore, we generally “shouldn’t posit as fundamental that which we are seeking to explain.” (p.47) Chalmers presents no proof of why the double aspect of information should be established as a fundamental feature of reality, and posits experience as necessary to account for the very existence of conscious phenomena.

Clark further expands upon why Chalmers even falls prey to such machinery in the first place. One of the reasons is that Cartesianism still lingers quite strongly, although it has evidently faded in the last century or so. Another is related to a sort of anthropocentric bias, whereby qualitative experiences are linked to complex organisms like us, and no place is left for those who don’t show the ‘right’ characteristics to be candidates for experiencing consciousness. The point is, we don’t know well enough what is responsible for those experiences to rule out a priori who has them and who doesn’t. Clark further proceeds to say that qualia may simply be some aspects of specific kinds of functional organisation. The third point the author makes to explain the special role of consciousness is related to the second: humans like to think of themselves as unique mainly because of their conscious functions, particularly rational agency. The root fear is that there will be nothing left for free will once we accept consciousness as a mere physical process – hence the creation of such dualism to defend our purported specialness as human beings. Clark doesn’t address this emotional concern.

The author interestingly turns the problem of the ineffability of qualia upside down: since “as subjects we are constituted by and identical to cognitive processes which themselves instantiate qualia [the identity hypothesis], qualia are what it is for us to be these processes” (p.51, original italics). The hypothesis stands as the most reasonable one given the previous description of how scientific theories must proceed. “The ineffability of qualia, among their other properties, is thus a consequence of and explained by the functional identity hypothesis.” (p.51)
The ineffability of subjective experience as proof of functional identity is taken even further when Clark says that such opaqueness “could be a clue to their [qualia] not having a determinate intrinsic nature.” (p.55) Qualia being such a basic part of who we are, how could we possibly develop a perspective on them? Such a perspective rules out, counter-intuitively, that we are actually having a first-person point of view on our experience. And the fact that we cannot speak about ‘that’ subjective experience exactly as it is uniquely felt by the person is itself proof against its having an intrinsic nature.
Chalmers’ second principle is even used to support the very notion of the identity hypothesis: organisational invariance is indeed quite a strong argument in favour of a close relationship between qualia and functions.

Clark concludes by pointing out how intrinsic, essentialist approaches to consciousness are structurally resistant to any scientific, functional inquiry, as they are built to escape any objective definition. He takes science’s last challenge to be the defeat of that resistant intuition, which casts subjectivity as an ‘ontologically separate world’, so strongly does it emerge in our everyday experience as something peculiar and uniquely different from everything we see around us.

 

The Why of Consciousness: A Non-Issue for Materialists – by V. Hardcastle

Hardcastle’s position could be summarised as follows: “pointing out the relevant brain activity conjoined with explaining the structure of experience and some functional story about what being conscious buys us biologically would be a complete theory of consciousness.” (p.62)
She takes Chalmers’ move of placing consciousness as a ‘brute fact about the world’ to be wrong for the following reasons: brute facts are necessarily basic, and biological facts have all been shown to depend upon more basic physical principles. Taking consciousness out of the biological realm seems to overcome this objection, but it is then left with nothing to support it, and fails to qualify as an ontologically new category. She further argues against the supposedly phenomenal nature of information – one of its double aspects according to Chalmers – since, as we have known since Freud, much of our information processing is unconscious.

Hardcastle shows how the matter of the controversy might be genuinely doctrinaire: just as some won’t accept any descriptive, functional explanation of the wateriness of water or the aliveness of life, so any identity statement about the nature of consciousness is going to fall short as well.
She then readily points out how the choice of opting in or out of the scientific game is largely a matter of accepting its rules; she defends the materialistic approach as genuinely coherent with the current scientific model, and declines to provide any further argument to those who have antecedently chosen not to play the game as it is set up.

 

There Is No Hard Problem of Consciousness  – by K. O’Hara and T. Scutt

In this paper, the authors discuss both methodological and philosophical reasons to ignore the hard problem of consciousness as proposed by Chalmers, while distancing themselves from mere eliminativism.

The methodological reasons go like this: since the hard problem of consciousness is far from being well defined, and since there is no basic idea of how an approach to its solution should be carried out, we should focus on those things – the easy problems, at least in principle – that would provide us with further understanding of the nature of consciousness. This is the pragmatic take on the problem.

Since methodological reasons alone won’t suffice for a complete argument for setting the hard problem aside, the authors provide philosophical, a priori instances as well. The first of them is a context argument: even though exploring all the easy problems might not lead to a solution to the hard problem, we cannot decide that in advance. Furthermore, as the history of science has repeatedly shown, advances in any field change the very way we come to understand it. It might very well be that solutions to the easy problems would adjust our whole understanding of what consciousness is, hence shifting the current framing of the problem.
The second philosophical argument is an epistemological one, and goes as follows: we cannot pretend even to understand an eventual solution to the hard problem until we have an established, well-understood concept of what consciousness is in the first place. It is widely acknowledged, Chalmers included, that studies of consciousness are far from ripe. Just as Democritus and Lucretius shot in the dark and correctly guessed the atomic nature of reality, the argument goes, we could be doing the same in the field of consciousness, but could never prove why that theory is right. In the economy of research, then, the previously outlined methodological reasons acquire new strength from such reasoning.
O’Hara and Scutt then examine supposed attempts to solve the hard problem (Edelman 1992, Crick 1994), mainly exposing them as covert attempts to pass off solutions to the easy problems as a solution to the hard one.

The charge of eliminativism is strong, and both authors acknowledge that. They rebut it by offering a practical use of the concept of consciousness that would make it precious, if not ultimately defined, for the advancement of research. They point to the field of anaesthesiology, where denying the very phenomenon of consciousness would raise unsolvable problems.

 

Should We Expect to Feel as if We Understand Consciousness? – by M. Price

The cornerstone of this paper is to question the assumption that the explanatory gap between the objective description of the brain and subjective experience is somehow problematic. Price does so in three different ways.

First, Price argues, explanatory gaps are far from rare in our explanatory accounts of causal relationships. He poses the issue in the following terms: everyday causal inferences are what allow us to feel a sense of understanding when we recognise an event A to be directly responsible for an event B. It is a natural tendency, indispensable for our survival, but it may not be the entire story. In fact, an extreme philosophical counterpoint can be traced back to David Hume and his Regularity Conception of causality.

In a nutshell, “The idea of a causal nexus is in principle non-sensical because ground A and outcome B cannot at the same time be different from one another and account for each other. […] Causation ‘as it really is’ consists just of regularities in the relationships between states of affairs in the world.” (p.85, original italics) The idea is that there is no causal relationship between objects at a fundamental level, and that sounds much like what quantum mechanics is telling us. Popper (1980) insisted upon the fact that causation as such is not a necessary condition of scientific inquiry, for its usefulness lies in its ability to distinguish between accidental regularities and consistent ones, the latter ending up labelled as laws.

This is how the second and third parts of Price’s paper are intertwined: if our innate ability to cover up such major gaps makes the consciousness gap look particularly problematic, for it somehow escapes our powerful pattern-recognition habits (second part), then it is useful to understand how we usually explain causation in psychological terms (third part).

Price leans heavily on a paper by E. Rosch (1994), who arrived at the brilliant intuition that “explanations that derive events from something other than themselves only come to feel like explanations because somewhere along the line they surreptitiously accomplish the trick of introducing the outcome itself.” (p.87) How is a logically circular explanation of causality turned into a psychologically sound one?

1. Transfer of a property from ground to outcome, e.g. the transfer of the property of motion. Such a trick doesn’t apply to consciousness, as its subjective nature – the supposed emergence – could never be mistaken for a property of the ground, i.e. the brain.

2. Perceiving an object or intending an action: we are naturally led to believe that our perceptions are similar to the objects of those perceptions, just as any action can be ascribed to some underlying intention. That is not the case with the particular matter of consciousness, where the issue is not to understand how these relationships work, but rather to understand how consciousness itself comes about as subjective experience.

3. Seeing grounds and outcomes as the same entity, but transformed in some way. Here lies a strong objection against the identity thesis: “Leibniz’s Law of the Identity of Indiscernibles requires that all properties of identical entities are shared, whereas the crux of the mind-brain problem is exactly that the physically described pain does not share the crucial property of first person subjectivity.” (p.89) (I sketch the principle in standard notation right after this list.) This clashes with Clark’s take on why we can’t even produce a first-person perspective of our experience. “It is all very well to think of consciousness and its ground as the same thing viewed from differing perspectives,” Price’s argument goes, “but this merely begs the question of how such radically differing perspectives can come about.” (p.89)

4. Seeing an outcome as a property of a category to which the ground belongs – the general theory of panpsychism, a peculiar version of which is Chalmers’ phenomenal aspect of information. This is the position of those who see consciousness as a property of all things, which makes it much easier to close the mind-body gap. The fact is, panpsychism has very weak appeal in the current scientific consensus – if any.
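
A side note of mine, not in Price’s text: the Leibnizian principle as used in point 3 – identical entities share all their properties, usually called the indiscernibility of identicals – reads in standard notation:

\[ x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy) \]

If a pain and its neural ground were identical, every property of one would have to be a property of the other; first-person subjectivity seems to be precisely the property that fails to carry over.
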
Price adds two further concepts to clarify what may additionally lead to an ‘obscuration’ of explanatory gaps: similarity and familiarity. These are two well-known mechanisms that help us reach that feeling of understanding even when no logical explanation is provided. “If similarity and familiarity help to obscure explanatory gaps, then when we encounter unfamiliar relationships which are also unlike anything else in our experience, […] explanatory gaps will be particularly obvious. […] Consciousness might find itself in a similar boat.” (p.91)

So, what is Price’s argument for? That we shouldn’t put too much trust in a feeling of understanding to tell whether we are getting closer to solving the hard problem. He wants us to fully acknowledge its psychology, and to keep in mind that what we perceive is not only a matter of what’s ‘out there’.

 

Consciousness and Space – by C. McGinn

McGinn goes back to Descartes’ understanding of mind and body to defend the following position: conceiving of the mind as a non-spatial entity suits our ordinary understanding of mental phenomena well. It is hard to deny, in his opinion, that we can’t help but regard conscious states as unperceivable, for we are unable to assign them any sort of spatial location by means of external observation. Experiences wouldn’t even be detectable by eyes sharper than ours: such is our condition that we can safely say that ‘consciousness is not a thing’.

McGinn acknowledges the fact that we do conceive of our brain as the main location of thoughts, and that we don’t think of ourselves as being anywhere other than in our body. Still, he maintains that this doesn’t “go very far in undermining the intrinsic non-spatiality of the mental.” (p.99) He further argues that our locating phenomenal events in space is a ‘sort of courtesy’, and takes the fact that we do so only approximately to be a mark of their intrinsically non-spatial nature. Mentioning the notion of solidity, McGinn maintains that mental phenomena violate the principle, as they aren’t in competition for space. 1)This supposition rests on the ground that mental things somehow don’t have spatial properties. It seems to me that the simple fact that we cannot produce two simultaneous thoughts should tell us enough about the issue, since today’s scientific framework takes space and time to be radically unified. The very concept of mental causation could be supported only by giving up a non-spatial conception of mental phenomena.

McGinn tries to account for the non-spatial nature of consciousness in the following terms: since the physical properties of the universe came about with the big bang, and since it is conceivable that something actually existed before the big bang and somehow, exceptionally, gave birth to our universe, it is possible that this very pre-spatial level of reality is responsible for the non-spatial nature of consciousness.

Thus, McGinn argues, since “the brain cannot have merely the spatial properties recognised in current physical science” (p.103), we need to rethink the very nature of space as we are accustomed to conceive it. Consciousness being such a spatial anomaly, impossible to locate in any way, there must be other properties of space which we don’t yet know, the argument goes. 2)In support of this claim, he posits that reductionist views of the relations between scientific fields should be rejected, as certain problems are not transferable between them. Consider, McGinn says, how it would be “grotesque to claim that the problem of how the dinosaurs became extinct shows any inadequacy in the basic laws of physics!” (p.104). He couldn’t have better predicted the theme of Lisa Randall’s latest bestseller!

We may not even be in a position to discover such new properties, for genuinely epistemological reasons. “We represent the mental by relying upon our folk theory of space because that theory lies at the root of our being able to represent at all – not because the mental itself has a nature that craves such a mode of representation.” (p.107) Furthermore, “To represent consciousness as it is in itself – neat, as it were – we would need to let go of the spatial skeleton of our thought. […] So there is no real prospect of our achieving a spatially nonderivative style of thought about consciousness.” (p.107)

It seems to McGinn that grasping this claimed additional, meta-spatial nature of space is beyond our capacities. This is how the paper ends.

 

Giving Up on the Hard Problem of Consciousness – E. Mills

Mills frames Chalmers’ problem as “the problem of providing a non-causal explanation of the production of consciousness by physical processes”. (p.110) Therefore, the aim of a theory of consciousness should be to explain why ‘the causes of experience have the effect they do’.

The first type of such theories would be a deeper causal explanation of why mental phenomena arise. Still, these very explanations wouldn’t be able to answer why such features produce consciousness. The second type is slightly different, for it tries to detect some sort of physical mechanism that would give birth to mental phenomena. But it still is a mechanism. Therefore, the hard problem is insoluble.

Mills endorses Chalmers’ move of making experience a fundamental feature and proceeding to build a theory upon it, as “fundamental entities can interact in lawful ways, and a theory which states these laws can be both true and useful.” (p.111) 3)This statement holds true for fundamental entities that have demonstrated strict relationships with other fundamental entities, such as physical particles. Consciousness, however, doesn’t have this status, as nothing else supports its supposedly fundamental nature. Nevertheless, Mills attacks Chalmers’ double-aspect principle as problematic: if it has unrestricted applicability, then everything, even a pin, would be conscious; and if it is somehow restricted, why it is so remains unanswered. In any case, the argument goes, the double-aspect principle “still merely asserts that informational states correspond to phenomenal ones. It still says nothing about why these correspondences hold.” (p.113)

The fact that we won’t be able to solve the hard problem shouldn’t annoy us. Just as Newton was charged with mysticism for not being able to qualify gravity, we may very well be content with the insolubility of the problem. 4)This understates recent advances in physics toward the confirmation of gravitons, and contrasts with the spirit of science, which would always look for ever more fundamental underlying laws. “We inherit from Hume the view that once we have reached fundamental laws governing empirical phenomena, there is no further explaining why these laws should be true”. (p.115) We should apply the same reasoning to consciousness as well. 5)I see this last move as an alternative way of placing consciousness in the realm of fundamental entities, just as Chalmers does.

 

There Are No Easy Problems of Consciousness – by E. J. Lowe

Lowe reproaches Chalmers for giving too much credit to physicalism. In his view, there are no ‘easy’ problems of consciousness, as supposedly merely functional activities – such as ‘discrimination’, ‘control’ and ‘report’ – are used to describe both conscious activities like those performed by humans and machine behaviours. Lowe points out that there is no reason to believe such analogies are correct, invoking the pathetic fallacy 6)https://en.wikipedia.org/wiki/Pathetic_fallacy in his defence.

Lowe further upholds the Kantian ‘Thoughts without content are empty, intuitions without concepts are blind’ to support the subtlety of the relationship between conceptual content and perceptual experience. It follows that ascribing genuine thoughts, complete with an articulated conceptual structure, to machines – which are essentially characterised by a lack of qualitative experience – is a rather weak argument in favour of a clear-cut demarcation between ‘easy’ and ‘hard’ problems of consciousness.

Lowe also takes apart Chalmers’ characterisation of human cognition based on the Shannonian notion of information. It is fully appropriate for describing “the activities of computing machines, but is wholly inappropriate for characterising the cognitive states – beliefs, thoughts and judgements – of human beings.” (p.119) Why so? The difference lies in the properties of the informational state as described by Chalmers, which seems to be missing any ‘conceptually articulated content’, as opposed to the thoughts, beliefs and judgements of human beings. Lowe suggests an example to clarify his point: take the section of a tree trunk and consider it as an ‘informational state’ of the tree. Clearly, the argument goes, even though we can infer from the number of rings some information such as the tree’s age and the like, the ring pattern itself doesn’t embody the concepts of time and number. Concepts are to be ascribed only to human beings, as they and only they can tell the age of the tree, precisely because they hold the concepts of time and number.
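
A quick gloss of mine, not Lowe’s: the Shannonian notion of information at stake here measures information purely as a reduction of uncertainty over possible states, with no reference to conceptual content. For a source with outcome probabilities p_i, the entropy is

\[ H = -\sum_i p_i \log_2 p_i \quad \text{(bits)} \]

Tree rings carry information in exactly this statistical sense, which is Lowe’s point: such a measure says nothing about possessing the concepts of time or number.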

Proceeding along the same line of thought, Lowe considers Chalmers’ definition of awareness suitable for computers, not for humans. As he pointed out earlier, such a physicalist account of the functions related to the ‘easy problems’ doesn’t fully explain their complexity, and the computer-human analogy falls prey to this ingenuousness.

I found it difficult to meaningfully synthesise how Lowe supports his point, and I therefore report a full quote: “If by ‘producing a report on internal states’ Chalmers just means generating a second-order informational state (in the Shannonian sense of ‘information’), then although this is something which can indeed be perfectly well explained in a mechanistic way, it is not the sort of thing that needs to be explained when we are talking about the ability of human subjects to express in words their knowledge of the contents of their own thoughts and experiences – for such an ability demands the possession of genuine concepts, not only concepts of the things those thoughts and experiences are about but also the very concepts of thought and experience themselves. And the truth is that we have not the slightest reason to believe that a ‘mechanistic’ explanation is available, even in principle, for the capacity of creatures like ourselves to deploy the concepts of thought and experience and to ascribe the possession of such concepts to ourselves.” (p.120, original italics) The underlying argument is that the meta-ability to think about our experiences and thoughts as concepts themselves is an insurmountable obstacle to their reduction to mere functional processes.

If function is so saliently characterised by produced behaviours, Lowe deduces, then behaviours that are mechanistically characterised will suit mechanistic explanations very well; the point is, behaviours would be seen otherwise if their conceptual structure, as previously described, weren’t overlooked. “Once we appreciate the Kantian point that genuine thought, with real conceptual content, is only available to creatures with a capacity for perceptual experiences bearing not only intentional but also phenomenal content, we see that the sort of phenomenal consciousness which we humans enjoy but which computers and trees do not, far from being an epiphenomenon of ‘information-processing’ in our brains, is an absolutely indispensable element in our cognitive make-up, without which we could not properly be described as thinking beings at all.” (p.121)

Lowe’s conclusion is far from producing any positive argument in favour of physical reductionism: it would not be able to tell us anything meaningful about the ‘easy’ problems – in fact, it would not have the tools to say anything about any aspect of consciousness.

 

The Easy Problems Ain’t So Easy – D. Hodgson

Hodgson advances the hypothesis that certain functions have a causal role in producing the experience of consciousness, although they behave in such a way that the objective sciences will never be able to detect them. It follows that the so-called easy problems cannot be addressed without answering the hard problem.

Hodgson delivers his argument by first analysing the inefficacy of the scientific method in explaining human free will, given that choices are “selected by the person or agent for non-conclusive reasons.” (p.126) Non-conclusive reasoning is a type of inductive reasoning which is rationally compelling but lacks the grounding of deductive logic. It is characterised by its defeasibility; that is, it is vulnerable to generalisation errors. 7)https://en.wikipedia.org/wiki/Defeasible_reasoning; Cruz, Joseph (1999) Contemporary Theories of Knowledge (Studies in Epistemology and Cognitive Theory), p.36 Hodgson further explains that our commonsense understanding of choice is characterised neither by pre-determined laws nor by random laws ‘within pre-determined probability parameters’; “each choice is a unique efficacious event, in which non-conclusive reasons are resolved by a decision. And if that is right, then conscious experiences have a role which can’t be fully replicated or simulated by mechanisms which simply operate in accordance with universal laws, with or without randomness.” (pp.126-127) Such is Hodgson’s functional – although non-detectable – explanation of conscious phenomena.

Hodgson supports the previous argument with the following interesting observations:

  1. If all mind/brain functions could be explained by detecting specific performing mechanisms, then there would be no functional explanation of consciousness, for the impersonal, lawful development of any such system would exclude the possibility of choice, and therefore any efficacious role for subjectivity.
  2. Conscious and non-conscious systems have differences that a merely functional explanation of human reasoning and of specific sensations, such as pain or colour perception, cannot account for.
  3. Evolution’s preference for (fallible) consciousness over the enormous computing power of unconscious brains when facing new situations.

Correspondingly, Hodgson acknowledges that suggesting something other than universal laws could be seen as an appeal to superstition, that the cognitive sciences are advancing and new discoveries will be made, and that psychological experiments have illustrated how fallible human commonsense reasoning is. On this last point, the author notes that even though commonsense has its fallacies, it is nevertheless the ground of our understanding, and we can’t just jettison it.

Hodgson illustrates the explanatory power of non-conclusive reasoning with the following example: the sensation of pain has an irreducible role, because it leaves us the option of responding in different ways and doesn’t trigger any automatic response. If we are then able to “treat both the pain and the opposing considerations as non-conclusive […] choices are not just the working out of mechanisms obeying universal laws or computational rules.” (p.129)

Hodgson finally integrates Chalmers’ proposal in the following way:

  • If experience is fundamental, then so are the subject and its related feature of choice.
  • If choice is not completely reducible to a physical explanation of the universe, then any eventual bridging law between mental and physical realms would never be fully determinative.
  • If “neural isomorphs are possible, they could develop differently, not just because of possible randomness but also […] because they may choose differently between alternatives left open by the systems and applicable universal laws.” (p.131, original italics)
  • In a double-aspect manner, the brain/mind should be considered as a physical/mental whole, where neither its physical nor its mental aspects could fully account for its whole functioning.

 

Facing Ourselves: Incorrigibility and the Mind-Body Problem – by R. Warner

Warner supports his view of the inefficacy of physical sciences to account for the rise of mental states by introducing the concept of qualified incorrigibility.

Warner attacks the claim that science ‘of the sort we now accept’ could fully describe the mind, for such a qualification has no explanatory power. He argues that even by extending our current fundamental scientific understanding – a practice oftentimes used to explain phenomena that couldn’t fit into previous theories – with some psycho-physical principles, we would not reach a satisfactory explanation of mental phenomena, as those principles would either violate physical conservation laws or fail to produce any physical effect at all.

At this point, it seems that non-reductive theories of the type Chalmers proposed are definitively defeated. Warner proposes instead “to abandon the assumption that psycho-physical principles will not interfere with physical laws” and to “look for new conservation laws.” (p.138) The burden of proof for such an overwhelming need would rest on the idea of incorrigibility.

Science’s drive is to “correct all distorting influences that make the world appear to be different than it really is” (p.140), so that every item becomes a mind-independent one. Here is where incorrigibility comes into play. The traditional conception of incorrigibility is: “for at least some mental states, necessarily, if one believes that one is in that state, then one really is.” (p.140, original italics) Warner proceeds to qualify incorrigibility by noting that, in order to be valid, incorrigibility has to be unimpaired (the sensation is not influenced by drugs, anxiety, lack of attention, hypnosis), 8)A detailed explanation of what counts as impairment and what doesn’t, being so crucial to the point, would be much needed, but the author only provides references to his 1986, 1989, 1992, 1993 and 1994 works. non-inferential (the belief that one is in pain, for example, arises simultaneously with the sensation; it is not a matter of inference) and non-causally or non-nomologically necessary (as, once the sensation is recognised as non-inferential and unimpaired, “there is nothing more to do to ensure that [that sensation] is true.” (p.143, original italics)). In relation to unconscious impairments, Warner further adds, in a rather Buddhist fashion, that they could be “fairly pervasive and that truthful self-consciousness can be an achievement greatly to be prized.” (p.143)

What is the consequence of qualified incorrigibility? It guarantees that some events can be fully and truthfully recognised by subjective inquiry. This provides Warner with the argument for including within the physical sciences ‘an account of mind-dependent items’. Warner contests Chalmers’ view of reportability as an ‘easy problem of consciousness’, as it precisely describes the incorrigibility issue. It is incorrigibility that “captures the ‘subjectivity of experience’. […] Many kinds of mental states enjoy qualified incorrigibility – including beliefs, desires, emotions. […] Experience simply happens to be the domain of the most convincing examples of incorrigibility.” (p.145)

Warner closes his paper by arguing that embracing incorrigibility is necessary, and that attempting to “construct a picture of the mind in mind-independent terms is to erase the mental from the picture altogether.” (p.145) He defends the subjective, first-person perspective as necessary, since any free-will deliberation is not the result of external observation of one’s intentions, but rather a direct knowledge grounded in qualified incorrigible recognition.

 

The Hardness of the Hard Problem – by W. Robinson

Given that the Hard Problem could be formulated as follows:

(HP) Why should a subject S have a conscious experience of type F whenever S has a neural event of type G? (p.149),

Robinson proceeds to briefly examine some reactions to (HP): the eliminativist; the functionalist – to which he answers that identifying regularities between conscious experiences and neural events would leave open the question of why such regularities exist at all – and the scientifically faithful – to which he rebuts that if no scientific progress on (HP) has been made in the last 300 years, then it may very well be the case that current scientific methods are not suitable for answering (HP).

Robinson advances the hypothesis that our current scientific framework can’t answer (HP), and that we must understand why that is so and ‘remove the sting’ from the issue. He proceeds to examine two key components of his argument:

(F1) Explanations of regularities between occurrences of two kinds proceed by deriving a matching between the structures of those occurrences. (p.151)

He offers the following example to make the point: to be valid, the explanation of why H2O is liquid at temperature T needs not only the property description of H2O molecules at temperature T, but also an implicit premise characterising liquidity through some structural properties – like conforming to shapes but not to volumes, etc. Robinson extracts a corollary of (F1), which is

(F2) If a regularity involving a given kind is to be explained, the kind must be expressible as a structure,

from which he derives a stronger claim, that

(F3) Whenever we can find a structure in a kind of occurrence, there is hope of finding an explanation of it. (p.152)

This is the form that all functional explanations more or less take for granted. The second fact needed to account for the hardness of (HP) is the following:

(F4) Among the properties of conscious experiences, there is always at least one that has no structural expression. (p.153)

Robinson offers the example of painful sensations, which cannot necessarily be described by some sort of regularity in intensity or spatial pervasiveness, and would therefore retain some intrinsic, non-structural properties. An interesting rebuttal is provided to the common belief that knowing more about the relational properties of the phenomenal realm would drive the total unexplained toward zero: “In all examples of which I am aware, in which we find structure within the phenomenal realm and then explain it, terms are required for the explanatory relations that are themselves properties of phenomenal (=conscious) experiences. Thus, each case provides no net shrinkage of the amount that needs to be explained, and therefore there is no reason to suppose there will be convergence to zero.” (p.153)

Any functional explanation would then chase after the next structure to explain the previous one, as (F3) suggests, but it ends up introducing new elements that need further functional explanation. It follows that “the fact that each explanation of one property re-introduces the Hard Problem for another property ought to convince us that (F4) is indeed correct.” (p.155)

Conclusion: to be satisfactorily solved, the Hard Problem may need a shift in the very way we produce explanations. Robinson argues that because “the conditions for finding intellectual satisfaction are contingent,” they are in principle changeable. What in particular should change in our understanding is that we are currently unable to accept ‘structureless properties’ as a fundamental feature. Our current conceptual framework doesn’t provide any solution to the hardness of the Hard Problem. 9)I have omitted how the author deals with the two major objections to his argument, namely that his account doesn’t ‘invoke the subjectivity of conscious experiences’ and that physical perceptions per se could not have conscious experiences. Robinson answers the first objection by first designating consciousness as one among the properties of conscious experiences – remember the example of pain – then by rejecting relational theories [for X to be a conscious X is for it to stand in some relation, R, to some other thing, Y], therefore characterising the consciousness of conscious experiences as intrinsic and essential, i.e. “that they cannot exist without being conscious.” (p.157) He then proceeds to answer the second objection by sectioning colour perception into the molecular properties of the colour (colourO) and the ‘conscious experience that goes together with the perception’ of colourO things by a subject S, named colourC(S). Subjects would then learn to attach colourC(S) experiences to colourO objects via some causal neural paths, which would be named colourN(S). Robinson claims that only colourC(S) properties are conscious ones.

 

The Nonlocality of Mind – by C.J.S. Clark

Clark’s general claim is that the mind cannot be found in any spatial location, not even in the higher-dimensional spaces that mathematics allows. He starts off by embracing the Cartesian view that the existence of mind is axiomatic, and that we should study the mind first and foremost from an experiential perspective. “It seems unnatural to derive mind from physics, because this would be to try to explain something obvious and immediate (mind) from something (physics) that is an indirect construction of mind.” (p.166) His approach is then to rewrite physical explanations to suit the privileged experience dictated by the mind.

While characterising which conscious states should be considered spatial and which shouldn’t, Clark posits that all percepts belong to the first category, while all other thoughts reside in the second. He further proceeds to clarify how the compresence of spatial and nonspatial thoughts gives rise to various degrees of blurred spatial definitions of mind: the thought of a distant star appears to stay together with the thought of a nearby car, when in fact those relations happen independently of a Euclidean conception of space. A spatial connotation of the mental would therefore be misleading.

Common objections to nonlocality described by the author are:

  1. that special relativity should define mental events as spatially characterised, for it is possible to say that mental event A is followed by mental event B. Clark, however, considers the argument invalid, for it begs the question by assuming that special relativity should be applicable to mental events;
  2. that “whatever we may think about mind, most of us hold that our decisions have physical consequences – so a given decision affects a particular region of space-time.” (p.169) There would be some mental region R to which we could attribute properties that link it to a past event Q or a future event P. Clark denies that the objection is valid, for examples in which sets Q and P hold up without R can be constructed.

After Cartesian dualism and epiphenomenalism have been ruled out, there is only one possibility left: the re-examination of physical laws in the light of quantum theory. Because the qualified Newtonian approach is invalidated by the EPR effect, a quantum-logic formulation in the footsteps of Mackey (1963) is the ground for developing nonlocality as applied to consciousness.

Given the results of the EPR effect, which strongly imply a global nonlocality, here is how Clark proposes we go on: “We do not start off assuming that the universe is composed of independent atoms. So global effects do not require special mechanisms to make them happen; rather, special mechanisms are required to break things down to the point where physics becomes local.” (p.172) Furthermore, if those decohering events which lay out local consequences couldn’t be observed from the outside, we must find a way to formulate how they are engendered from within.

Clark links such a nonlocal physical property with consciousness, which would be carried by brain processes yet remain separate from them, much as charge is carried by particles. And if the mind is fundamentally nonlocal, then its structure should be nonlocal as well.

Such a turnaround of physics should be matched by ‘putting mind first’: “We would be in a position to understand how it was that mind could actually do something in the cosmos […] by determining which decohering histories of questions [= collapses of the quantum wavefunction] are realised in the process of self-observation that is embodied in consciousness.” (p.174) Clark claims that human free will as we define it matches “the essence of quantum logic, where the range of possibilities is not fixed in advance.” (p.174)

Consciousness thereby acquires a precise framework: an emergence from the interplay between the nonlocal nature of mind and the local, physical nature of matter.

 

Conscious Events as Orchestrated Space-Time Selections – by S. Hameroff and R. Penrose

Hameroff and Penrose build their theory around the concept that experiential phenomena are inseparable from the physical universe, in a way so profound that it would be hardly detectable, except for the non-computability of conscious processes (they hold that some conscious states cannot be derived from previous ones by algorithmic processes). They attribute such a property to the undecidable nature of the wavefunction collapse. The self-reduction of the wavefunction – not an external, randomly triggered one – is essential to the rise of consciousness, under the special condition that “Only large collections of particles acting coherently in a single macroscopic quantum state could possibly sustain isolation and support coherent superposition in a timeframe brief enough to be relevant to our consciousness.” (p.179)

Penrose challenges the conventional Copenhagen interpretation, which explains the wave collapse through random probability, weighted according to laws that describe the evolution of one state into the next, by positing an underlying, non-computational unknown as a more accurate description of the collapse. He supports a gravitational account of how wavefunctions may collapse once they reach a certain threshold – an objective explanation of quantum state reduction (OR). 10)Penrose argues that the gravitational curvature of space-time hasn’t been considered by quantum physicists. The official claim is that gravitational forces at the micro level are so tiny that considering them wouldn’t make any difference. Penrose instead argues that even almost-undetectable differences may have large effects. He proposes a superposed state made of different space-time sets, each of them ascribable to the possible “places” the particle assumes in the different states of the superposition, with the subsequent space-time curvatures due to its gravitational force. Such a superposition is unstable, and will therefore decay, under precise laws, into the observable geometry we get at the quantum state reduction. Penrose acknowledges that there is no consensus on how objective reductions happen, but he sees no plausible alternative to his proposal. The authors conclude that “If, as some philosophers contend, experience is contained in space-time, OR [objective reduction] events are self-organising processes in that experiential medium, and a candidate for consciousness.” (p.184) That is to say, if experience is a feature of space-time, it may also have its particular OR events, and thus ‘a candidate for consciousness’. They identify cytoskeletal neural microtubules as the appropriate place for coherent superposition and OR to occur within the human body.
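
For orientation – my gloss from Penrose’s published proposal, not a quotation from this paper – objective reduction is usually summarised by a collapse-time estimate of the form

\[ \tau \approx \frac{\hbar}{E_G} \]

where E_G is the gravitational self-energy of the difference between the superposed mass distributions: the greater the superposed displacement of mass, the sooner the self-collapse.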

This part becomes a little tricky to explain without the help of proper images. 11)Available here. In a nutshell, microtubules were singled out because they meet a set of preconditions, namely 1) high prevalence, 2) functional importance, 3) crystal-like structure, 4) ability to be isolated from external interaction/observation, 5) functional coupling to quantum-level events, 6) suitability for information processing, 7) cylindrical shape. In the proposed model, quantum coherence emerges in the tubules and operates until the superposition of their components (the tubulins) reaches the critical threshold, at which point it collapses. The resulting OR is a ‘time-irreversible process’ and corresponds to the experienced event of ‘now’.

[Figure: Hameroff-Penrose Orch OR model – source: http://cogprints.org/369/1/tics.html]

 

Penrose and Hameroff explain the coherence of individual, separate ORs – what they call Orchestrated OR (Orch OR) – as a function of microtubule-associated proteins, which would prevent tubules from being completely isolated, thereby allowing a sort of coordination between the different “set probabilities for collapse outcomes.” (p.188) This is how the authors describe the stream of consciousness – as a continuum of Orch OR events, which accounts for the coherence of conscious experience as we know it.

“If experience is a quality of space-time, then Orch OR indeed begins to address the ‘hard problem’ of consciousness in a serious way,” Penrose and Hameroff conclude.

 

The Hard Problem: A Quantum Approach – by H. Stapp

Given that neither classical mechanics nor dualism can account for the ‘hard problem’ of consciousness within purely physical boundaries, Stapp puts forward the classical Copenhagen interpretation of quantum mechanics as the framework within which to build a new, physical understanding of the issue. He quotes Bohr to illustrate the essence of the Copenhagen interpretation in the following terms:

In our description of nature the purpose is not to disclose the real essence of phenomena but only to track down as far as possible relations between the multifold aspects of our experience. (Bohr, 1934, p.18 – quoted at p.198 of the book; my italics)

Thus, in Stapp’s view, the experiences of observers are brought into physical theory. The set of mathematical rules that physics studies should be considered a description of our classical understanding of nature. The quantum wavefunction is, in its nature, a set of “probabilities for, or tendencies for, our perceptions to be various possible specified perceptions,” and “the experience of the observer becomes what the theory is about.” (pp.199-200) Since no physical description can be attached to the quantum state, as it mathematically represents a set of probabilities, what our physical laws are made for is to describe ‘classically describable’ perceptions.
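
For the physics-curious, a standard textbook gloss of mine, not Stapp’s derivation: the Born rule is what turns the wavefunction into those probabilities,

\[ P(a) = |\langle a \mid \psi \rangle|^2 \]

i.e. the probability of perceiving outcome a is the squared amplitude of the state \psi projected onto that outcome.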

Stapp provides us with a brief description of alternative ontological interpretations: Bohm’s model, the Everett many-worlds model, the Heisenberg model and the Wigner-von Neumann interpretation. Their aim is to get rid of the experiential component and coherently describe the universe. Stapp remarks that they are all dualistic in some sense, as they obey two different, intertwined dynamical laws: that of the wavefunction, and that of its reduction.

Bohm’s model could be described as a particle surfing on the wave. As the wave spreads out its many ‘branches’, each of them would have a particle on top, which would describe what our experience of the wave will be. This many-particle scenario is the set of possible outcomes from which the one we are going to experience would emerge. Bohm’s explanation makes it possible to answer the question of why we come to know only one aspect of the wavefunction, as the wave itself is not really a physical object but rather a probability density function. The causal explanation of why that ‘surfer’ becomes available to our experience would be that of assigning appropriate statistical weights to the initial conditions of the wavefunction.

Stapp considers Bohm’s model useful, but not parsimonious enough, as it has so many ‘branches’ that we will never get to know for real. He turns to the Heisenberg model as a better alternative, which consists of ‘actual events’ – those we experience – and a set of ‘objective tendencies for those events to occur’. There, our hypothetical ‘branches’ would be cut off once the ‘actual event’ takes place. The problem with Heisenberg’s model, again, is that we don’t know why this happens.

Everett’s model has a different appeal altogether. Simulating the brain/mind complex as a quantum object, and since Everett’s approach has no wave-collapse events, the simultaneity of different ‘branches’ could describe the brain as just one entity – the evolving wave as a set of different brain activities, which could correspond to different psychological persons. The problem with this theory – besides being hardly verifiable – is that the wavefunction, as we have seen, does not describe any physical object, as it involves probabilities, and therefore necessarily ‘or’ characteristics: either one branch exists, or some other one does.

Stapp turns to the Wigner-von Neumann interpretation, which suggests that wave reductions occur in concomitance with conscious events. Wigner-von Neumann interpretations, like all the previous ones, are dualistic, as they have “a component that can be naturally identified as the quantum analogue of the matter of quantum mechanics, and a second aspect that is associated with choices from among the possible experiences.” (p.204, original italics) He then proceeds to analyse how a quantum description of the brain/mind would be linked to the present interpretation of the quantum model.

In a simulation, the body/brain would evolve into a superposition of different possibilities, a set of randomly generated ‘plans of action’ that the brain would implement for its survival; the wave reduction would then pick one of those plans and allow it to be executed. In such a view, the collapse of the wave is “a natural consequence of the fact that wave function does not represent actuality itself, but rather […] merely ‘objective tendencies’ for the next actual event.” (p.206) According to Stapp, the wave collapse would then substantiate the psychological event as we know it.
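
In toy notation of mine – Stapp offers no such formula here – the picture is that of a brain state superposing candidate plans,

\[ |\Psi\rangle = \sum_i c_i \, |\mathrm{plan}_i\rangle , \qquad P(\mathrm{plan}_i) = |c_i|^2 \]

with the reduction selecting a single plan for execution; the weights again follow the Born rule quoted above.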

Stapp supports his claim by stressing how efficacious such a model is, namely how it suits the widely accepted hypothesis that consciousness arose as an evolutionary response to aid survival. The Bohm and Everett models wouldn’t support this feature, as they are completely deterministic. The quantum, local, random generation of ‘templates of action’ is the basket from which the discrete event that determines and actualises the behaviour of the organism is picked. Stapp goes even further, comparing the universe to a ‘giant mind’, for the underlying wavefunction should be considered as having a subjective nature in virtue of its probabilistic description. “The great and essential move of the Copenhagen interpretation was precisely to realise that although no classical aspect naturally pops out from the quantum physical reality, […] (certain of) our experiences are, in fact, classically describable, and hence the empirically observable classical aspect of nature can be brought consistently into physical theory by introducing our (classically describable) experiences, per se, directly into the theory as the very thing that the theory is about.” (p.208)

What about the causation part? How do these wave reductions occur, and why are they the way they are? Stapp provides the following explanation: by virtue of the Copenhagen interpretation, which sees a superposition of states as a set of possible experiences, the brain would be the ‘host’ of a selection process among those different experiences, and the actualisation would become self-defined. What physicists describe as a mere random process would actually be a nonlocal one. Stapp thereby manages to describe conscious experiences as entirely engendered by physical theory.

Taking up William James’ definition that ‘thought is itself the thinker’ (James, 1890/1950, p.401), Stapp explains the stream of consciousness as a sequence of discrete psychological events, bound together with an enduring sense of self by virtue of a sort of ‘fringe’ that would surround every thought, providing a stable background to be identified with the feeling of self. What, then, about free will? Stapp shifts the focus from the multitude of neural events – which would be indescribably messy – to the organism as a whole, thus assigning the choice to it. Randomness would then be restricted to the set of ‘action templates’ produced by the operational physical constraints of the body. A deterministic evolution at the microscopic level would thus be punctuated by top-down organic choices. 12)Much as random variability is accounted for in Darwinian evolutionary theory.

This theory stands upon a necessarily missing element: why the quantum wave collapses. Physical theory by itself cannot provide such an element, so the only thing we know exists besides physical laws, namely experience, should account for it. Experience is therefore fundamental, in the sense that reality is fundamentally experiential; its emergent, particular aspect is what we call consciousness. 13)To a physics-illiterate like me, the theory seemed coherent, almost convincing. Good science means being especially skeptical when inclined to accept something, so I searched for some rebuttals to the theory just exposed. Two very good responses that made me strongly qualify Stapp’s interpretation can be found in this blog post by philosopher and skeptic Massimo Pigliucci, and in a much broader analysis of the issue by physicist Michael Nauenberg. Nauenberg further argues here that the very interpretation of the wavefunction as a non-physical object entails that “there isn’t any mystery that its mathematical form must change abruptly after a measurement has been performed.” Furthermore, much of the misunderstanding that arose after von Neumann’s work is due to his simplistic treatment of the measurement apparatus as a superposition of two states, the “fired” and the “unfired” state. A correct approach would be to characterise any macro-object, such as a Geiger apparatus, as a recorder of atomic events, which by the rules of thermodynamics (the arrow of time) should be considered an irreversible process. Irreversible processes, of course, are everything there is to the collapse of the superposition, and would have nothing to do with the presence of a conscious being. Nauenberg remarks that Wigner was in fact the only major physicist (a Nobel laureate) to support the role of consciousness in the collapse of the function.

 

Physics, Machines and the Hard Problem – D. Bilodeau

Bilodeau bases his work on the fact that the interpretations of the findings of quantum mechanics are controversial, and that physics itself would therefore not be suitable for an ontological description of mind. He goes on to argue that the prevalent interpretation of quantum mechanics is conservative, for it does not fully acknowledge the real threat quantum mechanics has posed to an exclusively objective view of reality. In his words, “The ‘objective nature of reality’ […] is maintained by shifting everything we think of as objective physical fact […] over to the subjective side of the Cartesian split.” (p.220) 14)Not much is provided to support such a claim, besides clinging to an unsolved measurement problem – not quite unsolved, as Nauenberg showed above – and a 1929 Bohr quote about the indeterminacy of the subject-object boundaries of perception, where “no sharp separation between subject and object can be maintained, since the perceiving subject also belongs to our mental content.” (quoted at p.219)

From such a statement Bilodeau infers that the analytical habits we have developed have more to do with the workings of our minds than with the nature of reality. In other words, the geometries physics is so devoted to would be nothing but a projection of the mind onto reality. He focuses on the idea of dynamics to illustrate his point.

A descriptive account of the physical world would consist of a ‘historical’ description and a dynamical one. The former is the definition of an object relative to its location in space and time; and because space and time are “means of ordering our thoughts about experience” (p.222), historical descriptions pertain to the realm of experience and are determined by observation. The latter is the definition of an object as a system of abstracted, typical properties, which can be determined by deduction, correspond to observed features of experience, and are constituted in a way that can be described and predicted by general physical laws. Neither historical nor dynamical descriptions would be defined a priori.

Given that the laws of physics pertain to the dynamical description – the author sees them as laws of action, not of being – the mechanical worldview would try to describe the whole universe in dynamical terms, “so that the typical and particular become equivalent and no aspect of reality is excluded from the abstract representation.” (p.223) The problem with this view, according to Bilodeau, is that the physical reality of the world could not be differentiated from its abstract description if everything could exist as a mere mathematical structure. Therefore, the special geometry we identify with physical reality should be linked with our mental abilities; namely, “it is the subjective which makes the objective physical.” (p.223) What physics describes would therefore be an “empirical manifestation of a non-mechanical mode of existence.” (p.224)

Bilodeau remarks that the wavefunction shouldn’t be mistaken for a physical object. He sees it as a set of ‘causal propensities’ of an empirical event, for it wouldn’t count as ‘reality’ in a proper sense. The difference between such empirical and abstract aspects of reality is drawn by recalling the ‘classical nature of the apparatus’ – per Bohr’s definition – and by framing a subject-object duality in the following terms: “The apparatus is simply the experiment approached from the historical empirical point of view. The microsystem is the experiment approached from the abstract dynamical point of view. These are the two aspects of the same thing.” (p.226) 15)There is a mistake in the argument: a dual aspect of reality cannot really stand, simply because the claimed ‘objective’ part is not physical, as the wavefunction has no physical properties. This is strange, because Bilodeau himself did not fail to notice this feature of the superposition, although he claims it should be recalled in a dual-aspect theory of reality.

The author proposes to substitute physics as a basic description of reality with an even more basic, ‘that-which-is’ concept of reality; “not as a set of all particular things (events, objects, ideas, feelings, etc.), nor as a structure, but rather as simply the ultimate referent of all we say about any of those things.” (p.227) 16)This is the clarification you might have been waiting for since reading about the concocted underlying “non-mechanical mode of existence.”

So what is there to say about consciousness? First and foremost, machines and organisms should not be strictly compared in a ‘brain machine’ fashion, for the function of a machine is imposed by its designer, while the function of an organism arises from the inside; furthermore, organisms would have such unpredictable patterns and such an unlimited set of states that they could not be matched by anything mechanical. It follows that “just as we cannot abstract ‘cellness’ or ‘organicness’ from a cell and build it into a machine, neither can we abstract consciousness from a brain.” (p.230) Rather, consciousness would be an exclusive result of an organic process. Chalmers’ ‘hard problem’ of consciousness would be transcended by embracing a wider non-mechanical ontology. 17)By virtue of what has been laid out above, I trust you can infer how weak such an argument is. Nevertheless, this being a comparative collection of papers on consciousness, I wanted to include it as well.

 

Why Neuroscience May Be Able to Explain Consciousness – by F. Crick and C. Koch

Crick and Koch propose that we explain consciousness by locating those neural activities which are directly responsible for it. They appeal to clinical examples such as prosopagnosia to account for a basically neural explanation of subjective phenomena.

In the neuroscientific view, information is processed by neurons in a semi-hierarchical manner, where basic neuronal correlates contribute to performed actions by ‘sending up’ the information to higher processing structures, until it is defined in a property we recognise as motor-like, visual-like, etc. They recall the famous thought experiment of Mary, the woman who studied everything about the colour red but never had an experience of red, to explain that Mary doesn’t know ‘what it is like’ to see red precisely because she never had “an explicit neural representation of [the] colour in the brain, only of the words and ideas associated with [it].” (p.238)

This would be the main reason why we cannot convey to others the exact nature of any subjective experience. The communication would inevitably be carried out by different neural paths (the motor part and the subsequent verbalising part) than those directly involved in the subjective perception of the experience. So what is available to inter-subjective verification and inquiry is not the very nature of the conscious experience, but only its relation to other ones.

Chalmers’ suggestion is that we approach the problem of why in the world we have experiences by introducing the double-aspect information theory. Crick and Koch suggest that we look for neural correlates responsible for meaning, namely how neurons that code some visual information, for example, are linked to others that are responsible for making sense out of such perception. “It would be useful to try to determine what features a neural network […] must have to generate meaning. It is possible that such exercises will suggest the neural basis of meaning. The hard problem of consciousness may then appear in an entirely new light.” (p.239)

 

Understanding Subjectivity: Global Workspace Theory and the Resurrection of the Observing Self – B. Baars

Baars starts from Chalmers’ endorsement of Global Workspace Theory – the fact that single conscious contents exert global consequences and are available to both unconscious and conscious systems – to draw a finer distinction than Chalmers’ ‘easy-hard’ one, namely a definition of subjectivity as consisting of an observing self attending the contents of consciousness.

The author’s claim is that subjectivity was ostracised throughout the twentieth century, only to be recovered by Thomas Nagel under the “what it is like to be something” definition. Baars laments that such a description – which he calls an ’empathy criterion’ of consciousness – is for the moment useless for scientific inquiry, with the result of keeping subjectivity out of any meaningful conversation about consciousness. He proposes we recover the traditional philosophical concept of subjectivity as ‘everything that has to do with a sense of self’, which has been extensively and fruitfully used in philosophical and psychological research, in order to move forward in defining what consciousness is.

  • Consciousness is attributed to, and accompanied by, a subjective sense of self, for unconscious events have the strong characteristic of not being available to subjective experience.
  • There is an interpenetration of ‘easy’ and ‘hard’ issues pertaining to consciousness, which could be exemplified as follows: the purported ‘easy’ problem of discrimination – the ability to distinguish yellow from red, for example – can be performed by non-conscious entities such as computers; but empirical observation, at least for conscious beings, tells us that fatigue or distraction either impairs or strongly reduces the performance of such an ability.
  • Similarly, causal interactions can be identified between ‘hard’ and ‘easy’ aspects of consciousness, at least in conscious creatures. Trying to keep a series of numbers in mind while reading this paragraph would seriously affect your information-processing abilities, by virtue of the limited working memory which characterises our brains.
  • Sense of self and contents of consciousness are independent: we can perfectly well go through our days with a relatively stable sense of self while experiencing many phenomena, or listen to a story and identify with its characters. 18)“In technical jargon, conscious contents and self may be orthogonal constructs, which always coexist but do not necessarily covary.” (p.245)
  • Consciousness creates access for self, as what characterises our ability to retrieve present or past conscious events is eminently a function of the ‘I’ we experience as a sense of self.
  • The sense of self could be rescued from self philosophical denial (as postulating an external observer would constitute no explanation, as Gilbert Ryle repeatedly pointed out) by embracing psychological and neurophysiological findings, which respectively characterise ‘self’ as a multitude of pattern recognisers and as self-systems which could be detected in our very brains, such as the so-called sensorimotor homunculus.

“The reader can consult his or her own experience to see whether […] conscious events are accompanied by a sense of subjectivity […] But is it real consciousness, with real subjectivity? What else would it be? A clever imitation? Nature is not in the habit of creating two mirror-image phenomena, one for real functioning, the other just for a private show. The ‘easy’ and ‘hard’ parts of mental functioning are merely two different aspects of the same thing.” (p.247, original italics)

 

The Elements of Consciousness and Their Neurodynamical Correlates – by B. MacLennan

MacLennan confronts us with the fact that standard reduction procedures within the scientific environment, by virtue of reducing objective features to further objective ones, are not equipped to give satisfactory solutions to the hard problem, which could be characterised as the relation between the subjective and the objective.

Although the investigation of consciousness has the epistemological limit of being observed through itself, MacLennan proposes that we identify some stable characteristics of consciousness, separating them from its changing contents, by means of methods that could be publicly validated. Using phenomenological terms, we could say that everything we could experience belongs to the phenomenal world, a ‘structure of potential experiences.’ Phenomena, therefore, are the contents which appear in our consciousness, and constitute the basic blocks of knowledge – our set of data (from the Latin for ‘given things’). MacLennan warns us not to fall prey to simplifications concerning phenomena, which are not as simple as they could seem: they are a complex agglomerate of information which pertains not only to the here-and-now, but also to the future in terms of expectations and to the past in the form of memories. Various experiments have shown how percepts are influenced by what we expect them to be, and how our mind fills in the gaps of various sensory data to constitute an experience of flow in the presence of discrete percepts, for example. MacLennan identifies such a process as “the continuity of subjective time.” (p.252)

Just as neurological functioning could be reduced in an objective-to-objective fashion, MacLennan proposes we reduce phenomena ‘subjective-to-subjective’ into protophenomena, a sort of ‘atom’ of consciousness. He maps protophenomena onto certain activity sites of the brain which are responsible for subjective perception, specifically neurons’ receptive fields. Reduction to receptive fields would differ from classical objective reduction because of their functional properties, namely their ability to receive inputs not only from sensory data but from more abstract sources, like interpretations and expectations. By virtue of such high correlation, we could say that all protophenomena depend on others, and that sensory and non-sensory protophenomena are closely related, implying a much more complex overall picture than the simple objective-to-objective reduction.

Where should these activity sites be located? MacLennan identifies them as synapses, but does not exclude that Hameroff’s microtubules could count as activity sites as well. Because protophenomena per se are not sufficient to produce macro-conscious events as we experience them, we need them to change in a coherent way, just as coherent patterns of atoms reveal themselves as physically detectable phenomena. [19] How exactly do protophenomena coherently produce phenomenal experience? “A population of protophenomena dependent on the same input protophenomena has a Conditional Probability Density Field (CPDF) that is the product of the CPDFs of all the high-intensity input protophenomena, that is, of all the input protophenomena present in the current conscious state. The CPDFs of individual protophenomena can be quite broad, but in the joint response to the same input of a large number, the product can be very narrow, so that they define a phenomenal state quite precisely.” (p.255) Furthermore, protophenomena should be considered theoretical entities, a useful way to proceed toward a fruitful understanding of consciousness that would need to be confirmed along the way.
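The quoted CPDF passage rests on a simple mathematical fact: the product of many broad probability densities can be very narrow. A minimal sketch of just that fact (my own illustration, not MacLennan’s model – the Gaussian shapes and all numbers are assumptions):

```python
# Product of many broad densities is narrow: the mathematical core of
# MacLennan's CPDF claim. Each broad Gaussian stands in for the CPDF of
# one input protophenomenon; their normalised product is far sharper.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# Ten broad CPDFs (sigma = 3.0) with slightly scattered centres...
factors = [gaussian(x, mu, 3.0) for mu in rng.normal(0.0, 0.5, size=10)]

# ...whose normalised product pins the state down far more precisely.
product = np.prod(factors, axis=0)
product /= product.sum() * dx

mean = (x * product).sum() * dx
sigma_product = np.sqrt(((x - mean) ** 2 * product).sum() * dx)
print(f"each factor: sigma = 3.0; product: sigma ~ {sigma_product:.2f}")
# For ten width-3 Gaussians the product has width ~ 3/sqrt(10) ~ 0.95.
```

Each factor barely constrains the state on its own, yet their joint product does so precisely – which is all the ‘very narrow’ claim in the quote requires.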

How does the author link mere neurological connections and the experienced sense of meaning? Increased activity of a protophenomenon would affect – either stimulate or inhibit – those which depend on it, thus constraining the set of possible conscious states and their evolution in time. Conscious states would somehow be necessarily defined as long as the underlying neuronal connections remain unvaried.

An interesting overall nondeterministic relation among protophenomena is thus proposed: “Even nonsensory neurons depend on non-neural processes, such as the physiology of the brain, and the physical environment of the body. Although these effects can sometimes be treated as extra, hidden inputs to the synapses, they are often nonlinear and comparatively nonspecific in their effects, so it is usually better to treat them as phenomenologically undetermined alterations of the characteristic patterns of the affected protophenomena.” (p.258) Neuroplasticity, moreover, affects protophenomena by altering their relations as a result of changing habits or learning, just as the generation of brand-new synapses may well correspond to the generation of new protophenomena. Such is the ‘flexible ontology’ of the phenomenal world that the author refers to.

MacLennan brings to completion the paper with the following implications:

  • Consciousness is a matter of degree, by virtue of its relation with the complexity of the underlying neural correlates; in such a view, simpler organisms should have less definite perceptual experiences, due to less coherent CPDFs.
  • If we were able to duplicate the exact information-processing properties of synapses, we would create a conscious experience; an AI built that way would therefore have protophenomena and be conscious.
  • The subjective experience of various investigated phenomena must be the way it is; an example is the impossibility of pitch inversion in the realm of sound perception, where pitch sensations are mapped neurologically and could not produce the same experience for high and low pitches.
  • Consciousness is a unitary emergent property of the relations among individual protophenomena, and its unity could in principle be measured by measuring the ‘tightness’ of those relations. Thus we can say that the unity of consciousness, too, is a matter of degree.
  • Unconscious mental processes could be variously interpreted as: 1) protophenomena with a low degree of coherence, thus not emergent at a conscious level; 2) evidence that what we commonly perceive of the world may not be the only conscious protophenomena population: other coherent emergent populations of protophenomena could very well be conscious as well, but manifest themselves in different ways, like dreams, urges and the like; 3) following a hypothesis of Sherrington and Pribram, the unconscious mind might rely on simple mechanisms like fire/unfire axons, which would make it instinctive and less reflective.

MacLennan proposes that we adopt irreducibility in both phenomenological and neurological realms to proceed further along a fruitful investigation: “The present theory is dualistic in the sense that certain objects in certain situations (namely, activity sites in a functioning brain) have fundamental properties (protophenomena and their intensities), which are not reducible to physical properties. It is also dualistic in that the inherently private fact of experience is not reducible to the phenomena experienced, which are all potentially public […] Nevertheless, it is a kind of monism in postulating one ‘stuff’, which happens to have two fundamental, mutually irreducible aspects (phenomenal and physical).” (p.265)

 

Consciousness, Information and Panpsychism – by W. Seager

Seager’s aim is to address the ‘generation problem’ – the explanation of why and how experience should be generated by physical stuff in specific configurations – and to show how it is crucial and ultimately unavoidable, so that it would oblige us to find an even more radical explanation than that proposed by Chalmers.

But first, Seager puts forward the evidence for why we should reject any denial of the generation problem and thus proceed from its necessity. The most recognised debunker of the generation problem, at least in Seager’s account, is Daniel Dennett. Seager paraphrases Dennett’s arguments as follows: the existence of conscious experience for a bat could be proven by finding out more about its nervous system, and linking what within the system is responsible for behaviour modulations. Seager remarks that unconscious mechanisms modulate behaviour as well, so consciousness wouldn’t be a distinctive property of behaviour. He suggests that perhaps what modulates behaviour would be neural representations above a certain intensity, such that they would be ‘conscious’; but then the generation problem would return in its full inexplicability. [20] Of course, what I have just presented here is the main argument Seager levels against Dennett. The argument thus reported is convincing, but I don’t know Dennett’s position well enough to vouch that Seager’s interpretation is fair. Dennett states in the reported paper that “Whether people realise it or not, it is precisely the ‘remarkable functions associated with’ consciousness that drive them to wonder about how consciousness could possibly reside in a brain. In fact, if you carefully dissociate all these remarkable functions from consciousness – in your own, first-person case – there is nothing left for you to wonder about.” (p.35, original italics)

Seager’s perplexities with Chalmers’ theory lie in the fact that a proposed fundamental feature of consciousness could not also depend upon a functional description, as Chalmers proposes under the principle of ‘organisational invariance’; consciousness would lose its fundamentality. Furthermore, an explanatory exclusion is posed to those who want consciousness to characterise only some specific kinds of functional descriptions. Seager rejects radical emergentism – the position by which consciousness, as the product of specific physical assemblages, has causal powers that differ from the causal powers of its constituents – as ‘not very attractive’. [21] Seager offers no explanation of why such a possibility should be considered unattractive. It seems that further argumentation is needed, considering that this would be the sole ground for rejecting functionalism as a likely explanation of conscious phenomena. I really am astonished that nothing more has been said. Chalmers’ theory would have another weakness in that the isomorphic nature of information and conscious experience doesn’t add anything to solve the generation problem, as it does not explain why some information bearers have experiences and others don’t.

Seager proposes we shift to a ‘more radical view of information’: information would not only be a causal process of bit-transferring, but should also be characterised in a semantic way. His argument is grounded in Quantum Mechanics. [22] Red flag: Seager is a philosopher – although he specialised in philosophy of science – not a quantum physicist. I am not proposing that philosophers shouldn’t take Quantum Mechanics into their models, but alternative interpretations of QM to those held by the scientific consensus should be suspect, especially when put forward by non-specialists. Seager talks about the two-slit experiment and how it generates an interference pattern which has different characteristics from the combined probability of the two paths the particles could take, as we would expect. When the interference pattern disappears, it is because of a disturbance – the act of measuring – which is generally understood as unavoidable. Seager claims it is not: “there is no need to posit disturbance in order to explain the loss of the interference pattern; mere information about which path the particles take will suffice.” (p.275) Seager pulls in the thought-experiment of placing a perfect detector which would not alter the particle state, and nevertheless determine its path: the interference pattern would disappear, “despite having no effect on the state of the particles” (p.275). [23] It is not clear how the particles’ state could fail to be affected, considering that we can’t possibly know what their previous state was. If the superposition of the particles’ state passing through the two slits comes with a concomitant interference-pattern picture, then the detector has not been put in place, since the interference pattern implies a wave-like behaviour, implying the impossibility of predicting where the particle has gone. If we place the detector and know exactly the position of the particle, the interference pattern disappears, as we are now ‘seeing’ the particle-like nature of the phenomenon. Can we say that no effect on the particles’ state has been exerted? No, since we can’t possibly know what the state was like before the detector was placed. Seager proposes therefore that Quantum Theory devises detectors to be carriers of a kind of information which is not ‘bit capacity’, for it doesn’t change the particles’ state, but which is able to differentiate between (and produce) the existence or nonexistence of the interference pattern by virtue of semantic properties. “The natural interpretation of [the] basic two-slit experiment is that there is a noncausal, but information laden connection amongst the elements of a quantum system. And this connection is not a bit channel or any sort of causal process (which shows, once again, incidentally, that we are dealing here with a semantic sense of information). Here, perhaps, we find a new, nontrivial and highly significant sense in which information is truly a fundamental feature of the world (maybe the fundamental feature).” (p.276, original italics)
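For reference, the standard textbook statement of the point under dispute (ordinary QM notation, not Seager’s own): with both slits open, the detection probability is not the sum of the two single-slit probabilities, because a cross term appears; any which-path information makes that cross term vanish.

```latex
P_{12}(x) = |\psi_1(x) + \psi_2(x)|^2
          = \underbrace{|\psi_1(x)|^2 + |\psi_2(x)|^2}_{\text{sum of the two single-slit patterns}}
          + \underbrace{2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right]}_{\text{interference term}}
```

With which-path detection the interference term vanishes and $P_{12}$ reduces to $P_1 + P_2$ – this is the disappearance of the pattern that Seager reinterprets in informational terms.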

Here lie the foundations of what Seager rightly calls panpsychism. He identifies four major objections to panpsychism:

  1. The combination problem: how would units of experience merge into the complex phenomena we call consciousness? This suggests panpsychism has a generation problem of its own.
  2. The unconscious mentality problem: if every atom has a mental aspect, how could we tell which of them have conscious properties and which don’t?
  3. The completeness problem: if consciousness were a fundamental property of the universe, we would expect it to show some causal effects that differ from those which can be understood in simple physical terms. How come such observations aren’t available?
  4. The no sign problem: no evidence of a nonphysical dimension of nature has been produced.

Seager answers as follows:

 

  1. QM clearly shows in the two-slit experiment that the state superposition is not a half-half mixture of particles passing through the left and the right slit. It has a different property altogether. It is therefore not a mystery that mental units could combine in such a way that new emergent properties arise. [24] Needless to say, maybe, but such an explanation could very well serve as an answer to the generation problem itself; no panpsychism would be needed at that point. Somehow, Seager fails to see this point when he asserts that “quantum coherence cannot solve the generation problem satisfactorily, but it might solve the combination problem.” (p.283)
  2. [not directly addressed]
  3. “As a physical theory, QM asserts that there is no explanation of certain processes since these involve an entirely random ‘choice’ amongst alternative possibilities. The world’s behaviour does leave room for an additional fundamental feature with its own distinctive role.” (p.280) [25] I think the inconsistency of Seager’s point has been previously well illustrated: the randomness of the wavefunction collapse does not imply any hidden variable, even less so conscious causality.
  4. If we accept that “there is no apparent sign of any gravitation between subatomic particles but since we take gravitation to be fundamental we are willing to accept that the gravitation force between two electrons really does exist”, we would then “expect that the effects of the ‘degree’ of consciousness associated with the elemental units of physical nature would be entirely undetectable.” (p.280) [26] Seager is willing to propose an essentially useless explanation – as it removes itself from falsifiability, how could it be useful? – to account for panpsychism.

Seager’s panpsychism has, by the author’s own words, no empirical consequences, for it is intended to be a ‘purely philosophical theory’. Nevertheless, he is not afraid to state that it naturally follows from the incompleteness of the current physical world-view, “as evidenced by the fact that physically identical systems can nonetheless act in different ways. The ‘hidden variable’ is not physical but a form of elementary consciousness”. (p.282)

 

Rethinking Nature: A Hard Problem within the Hard Problem – by G. Rosenberg

Rosenberg places himself among ‘The Liberal Naturalists’, those who are willing to occupy the middle ground between those who deny the difficulty of the hard problem (‘The Gung-Ho Reductionists’) and those who think the problem is unsolvable (‘The New Mysterians’). What characterises them is that “they are willing to suppose the existence of fundamental properties and laws beyond the properties and laws invoked by physics.” (p.288) [27] Can such people even call themselves Naturalists? In fact, the Stanford Encyclopedia of Philosophy clarifies that Naturalism is not much of an informative term anymore. “For better or worse, “naturalism” is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as “non-naturalists”. This inevitably leads to a divergence in understanding the requirements of “naturalism”. Those philosophers with relatively weak naturalist commitments are inclined to understand “naturalism” in a unrestrictive way, in order not to disqualify themselves as “naturalists”, while those who uphold stronger naturalist doctrines are happy to set the bar for “naturalism” higher.” Source: http://plato.stanford.edu/entries/naturalism/ What they propose is that we think of the unified collection of each individual’s qualia as a unique ‘qualitative field’, which should be treated as fundamental, and which is situated beyond the mind.

The problems with a cognitive explanation of consciousness (specifically, one based on high-functionality cognition, as opposed to low-functionality cognition treated as non-cognition) could be summarised as follows:

  1. Complexity criterion: ‘If a system reaches level of complexity N, then a qualitative field must arise from and co-evolve with it’. Rosenberg argues that such an explanation is fallacious, for it relies on concepts – ‘system’ and ‘complexity’ – that are too difficult to simplify in a way that would suffice to characterise them as fundamental laws. [28] I think the flaws here are to pretend 1) that there is a fundamental law of consciousness, instead of providing a mere agglomerate of characteristics as a definition, and 2) that consciousness laws should have the same elegance and simplicity as atomic laws of nature. Consciousness is invariably not an elemental property of matter (see the critique of panpsychism provided earlier), and the parsimony constraint should convince us that nothing we know about particles would be altered by attaching a qualitative property to them; they would behave just as they do under the current, merely physical description. A theory of ‘degradation of cognition’ would imply panpsychism, so we would have to arbitrarily set a cut-off for systems to be sufficiently complex to count as conscious: paradoxically, a cut-off of just one neuron would make the difference between a conscious and an unconscious system, yet we could hardly detect any difference in their behaviour. [29] This objection is somewhat more difficult to reject. Consider though the poetic naturalistic approach (illustrated by Sean Carroll in The Big Picture and deepened in this blog post): beyond a purely physical, elemental level, we try to make sense of the world in ways that are useful to us. We can therefore set an arbitrary cut-off between conscious and non-conscious systems because, as explained before, consciousness is nowhere to be found as a fundamental property of the universe, at least not as fundamental as quantum fields. It is up to us to characterise which systems are sufficiently complex to be called ‘conscious’ and which aren’t, just as we decide to call ‘flounder’ and ‘tuna’ two different types of fish. Of course, defining consciousness would be slightly more difficult than that, and so we can’t expect to define simple consciousness laws. Furthermore, it is not meaningful to say that we can just subtract one neuron to consider an organism not conscious: complex systems are by definition populated by hundreds of thousands of neurons, so clearly removing them one at a time would not count as a meaningful consciousness threshold.
  2. Functionality criterion: ‘If a system evidences paradigmatically cognitive capacities XYZ, then it will have an associated qualitative field co-evolving with it’. Rosenberg insists that “the kinds of laws we are looking for are on the same level as those governing gravitation, motion, and mass.” (p.293) So the problem here would be that of characterising cognition in a way simple enough to meet Rosenberg’s constraints. Even a functional account of cognition would not suffice, because it would still need teleology to exist. [30] This last point proves even further that fundamental laws of consciousness comparable to fundamental physical laws are not what we should be looking for.
  3. Biology criterion: ‘If a system reaches level of complexity N and is carbon based, then a qualitative field must arise from and co-evolve with it’. In addition to the complexity and functionality problems described above, a biological constraint will further require us to define what exactly about biology is necessary for consciousness to arise.

Rosenberg identifies two major intuitions that stand in the way of making panpsychism palatable: “1) we have no evidence for qualitative fields outside of cognitive contexts, and 2) the mere supposition is incoherent since it requires experiences without experiencers.” (p.297) To the first intuition, Rosenberg responds that any explanation of consciousness pointing at experiences other than subjective ones would not be supported by evidence [31: this is, in my opinion, a further reason not to look for a fundamental law of consciousness, as it is fundamentally subjective]; we shouldn’t then let a pre-theoretical bias influence us in defining which systems are conscious, but we should nevertheless include with empirical confidence those which are evidently conscious and be much more cautious about excluding anything, since we have less information about ‘what they would feel like’. To undermine the second intuition, Rosenberg suggests that we think about our ability to regard qualia as “experiential objects”; only the feeling of awareness would be irreducibly cognitive, for there is nothing about awareness that could be observed from the outside within my own conscious experience. He concludes by saying that qualia should nevertheless not be considered as independent objects ‘out there’, for they are essentially dependent on minds; it does however prove to us that we can conceive of qualia as something independent from the mind. Furthermore, when we attach qualitative experiences to non-cognitive systems, we are trying to fulfil an analogy such as “humans have conscious experiences and thermostats have experiences X”; therefore, “the difficulty of imagining qualitative fields that are not associated with minds comes from a shortcoming in our empathy, and not from a fundamental conceptual incoherence.” (p.300) [32] I hasten to remind the reader that this is an evident pathetic fallacy.

 

Solutions to the Hard Problem of Consciousness – B. Libet

Libet endorses Chalmers’ investigation of the ‘hard problem’, in that it evidences how physical fundamental properties are susceptible to a priori inference (that atoms have a certain mass, for example, should be accepted as a brute fact); it would therefore be equally reasonable to elevate consciousness to the same fundamental, irreducible standard.

Libet, however, has some problems with Chalmers’ ‘psychophysical principles’; he briefly states his concerns as follows:

  1. The principle of structural coherence would be flawed, because awareness has experimentally not been related to many of the ‘easy problem’ processes that Chalmers described. Libet describes awareness as an exquisitely subjective phenomenon, much as consciousness is; the principle of structural coherence thus loses all its explanatory power.
  2. The principle of organisational invariance links consciousness with observable behaviour; in fact, Libet stresses that there are numerous examples of functional behaviour carried out without the subject being aware of it. “The distinguishing feature for a conscious experience is an introspective report by the individual who alone has access to the subjective experience.” (p.302)
  3. Chalmers’ double-aspect theory of information relies strongly on the principle of organisational invariance, in that it relates physical information to certain ‘phenomenal spaces’; such a functional account would have the same flaws as the previous principle.

Libet has proposed (1994) a testable – and here the author stresses the merits of his hypothesis – theory of consciousness, which would explain the phenomenon non-reductively as a ‘conscious mental field’ (CMF) that would emerge from a particular set of neural activities. [33] The designed experiment, it seems, was never carried out. Libet’s hypothesis sounds much like contemporary electromagnetic-field theories of consciousness.

 

Turning ‘the Hard Problem’ Upside Down and Sideways – by P. Hut and R. Shepard

First of all, the authors say, let’s define what the ‘hard problem’ is not: it is not the problem of providing a scientific explanation of how matter, such as the brain, can produce intelligent behaviour, for we haven’t yet defined what limitations a complex physical system should have. What the ‘hard problem’ is, instead, is to understand how the ‘first-person’ subjective quality of experience, which is seemingly unphysical, could arise from matter, for no closer understanding of the brain’s functions has provided any clue to bridge such a chasm.

Hut and Shepard stress that there are some serious problems with the current scientific approach to the problem of consciousness:

  • Although conscious experience could not adequately be described in purely physical terms, it is supposed to arise from complex systems like the brain, where some regions produce conscious experience and some don’t, even though these regions are physically indistinguishable; the ‘evidently nonphysical’ characterisation of consciousness could therefore hardly be explained. [34] Two things: first of all, we cannot claim to know exactly what is going on in the brain, so as to say that two regions are physically identical. No serious neuroscientist has ever said so, even now, almost 20 years after this book was printed. So claims about the exact nature of the brain back in the late Nineties seem at least naive, which is all the more alarming considering that Shepard is a cognitive scientist. Even more worrisome is assuming that conscious experience is ‘evidently nonphysical’ without taking the trouble of defending such a position, other than by saying it is ‘commonly accepted’.
  • No accepted criterion has yet been defined to decide from external observation whether a physical process is characterised by conscious experience or not. No fundamental property has been identified to distinguish conscious systems from non-conscious systems, and even if we were able to discover that the firing of a particular neuron was responsible for a conscious event, that would tell us nothing about why an indistinguishable physical event would not be accompanied by conscious experience. [35] The authors here lean on the word ‘indistinguishable’ to imply that everything we know about conscious and non-conscious events leads to identical physical explanations; even though in this case it might be appropriate to say that two firing neurons are somehow identical physical processes, if we know nothing about the hundreds of their neural connections, we can’t attach any meaning to the firing of a single neuron.
  • “If nonphysical conscious experience is taken to have a causal influence back on the physical process from which it arose (psychophysical interactionism), how is this to be reconciled with the fundamental assumption of science that every physical state of a system is strictly determined by a preceding physical state of the system […]?” (p.307) [36] In fact, this is quite more than an assumption, and it should undoubtedly tell us that conscious experience is a physical process.

The authors propose that we turn the problem upside down, and start to see everything as based in experience: atoms, molecules and fields are not subject to direct experience; they are pure abstractions. What we refer to as concepts and words, for example, are nothing but ‘meaningless arrangements of molecules’ and ‘constellations of qualia’ in the minds of scientists. [37] These sets of molecules and qualia are so ‘meaningless’ and randomly put together that we base our very existence on them! I propose that the authors may well define arrangements of molecules as meaningless, but then they should not use these meaningless objects to set up a new theory. Rest in solipsism.

Thus the ‘hard problem’ could be softened: macro-objects of commonsense perception and micro-objects of scientific inquiry become equally useful elements to describe the world around us. The problem of the existence of other minds posed by the initial solipsism is to be softened by viewing intersubjectivity as “expressing properties that are inherent in subjective conscious experience, but in addition are mutually agreed upon by different subjects.” (p.309) Still, the biggest mystery would be the existence of an objective physical world. “Everything we experience (whether ‘out there’ or ‘in here’) is, alike, a part of our experience.” (p.310) Spatial and temporal extensions undergo the same treatment, and are no longer independently existing features of the universe.

What Hut and Shepard propose departs from traditional idealism: they do not deny that there may exist something behind the experienced phenomena, but any physical law should be considered nothing more than a useful hypothesis, to the extent that it allows us to ‘predict the regularities of our experience’.

Here the authors introduce the notion of turning the hard problem sideways: we should try to consider both physical reality and experience as providing a grounding for reality. Intersubjectivity would no longer be a mere ‘superposition of subjective and objective properties’. And the solipsistic approach is a good starting point for feeling and calling the thinking brain ‘my own’ with sufficient confidence, making me able to extend the study of consciousness through my consciousness, just as maths alone is sufficient to model maths. Maths and consciousness alike can be described self-reflexively. Moreover, we should grant reality a necessary structure for conscious events to occur; this aspect of reality would however be different from our classical notions of space and reality for the reasons described above.

If both the upside down and sideways explanations are merged, we could come up with the hypothesis that matter and consciousness are both “emergent properties of underlying and more fundamental aspects of reality” (p.314), which we should call X for lack of better definitions. To explain their view, Hut and Shepard resort to the following physical analogy: if we had to explain what time is to someone with a form of selective amnesia, how would we do it? We could take a series of snapshots and point out how certain objects have moved around in space while some haven’t, and describe the moving objects as having ‘more motion’ than still ones. The point is, we would not be able to explain time without relying on the very concept of time – to illustrate, for example, why some snapshots were taken before others; furthermore, the whole explanation would itself unroll in time. [38] What if we proposed instead a thermodynamical explanation of time? The arrow of time defines time by the fact that the universe has an overall larger entropy than the moment before. This is how we came to infer that something like the Big Bang could have taken place. Such an explanation would need no concept of time, except for the observers to exist and be able to link present experiences to previous ones, even if only for 2 seconds: it would just be a matter of comparing different universe states, and consistently observing that the snapshot of the second moment has a higher overall entropy than the snapshot of the first moment. The thermodynamical explanation would clear away some of the messiness around a ‘snapshot explanation’. Of course the concept of time would be experientially, implicitly involved; I cannot otherwise imagine how a thought-experiment itself could be conceived out of time. Let’s unfold the analogy, in which consciousness would be the equivalent of motion in the previous example. Just as the amnesic man had to infer the underlying, fundamental property of time from the experience of motion, the physicist would be shown how to start from the presence of conscious experience to arrive at the underlying aspect of reality that can give rise to consciousness. X stands to consciousness as time stands to motion. X would be everything that we ‘sense’, everything that ‘makes sense’ to us, and everything we know has such a nature of being meaningful, of ‘making sense’ to us. “Attempts to embed consciousness in space and time are doomed to failure, just as equivalent attempts to embed motion in space only. Yes, motion does take place in space, but it also partakes in time. Similarly, consciousness certainly takes place in space and time, but in addition seems to require an additional aspect of reality, namely X, in order for us to give a proper description of its relation with the world as described in physics.” (p.319) [39] I have a major concern here, which is the proposed fundamental property of time: in thermodynamical terms, again, time is nothing more than the concept we attach to the experimental evidence that universes proceed irreversibly toward a state of increased entropy. Two snapshots from an isolated universe showing two different entropies would tell us which of them came before and which after, without the need of anyone observing the process. Time is something we infer from those spontaneously unfolding processes, and we can confidently say that they would unfold with or without us.
“Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” (Richard Feynman)  “However abstract our notions of atoms, quantum fields, or more exotic constructs may be, all of these notions are ultimately grounded in experience. As such, they cannot even be considered as candidates for whatever it might be, if anything, that could be considered to underlie conscious experience.” (p.321)
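The snapshot-ordering argument in footnote 38 can be made concrete with a toy computation (my own illustration, not from the book; the diffusion model and all numbers are assumptions): given two snapshots of an isolated toy ‘universe’, their coarse-grained entropies alone are enough to say which came first.

```python
# Toy illustration of the thermodynamic arrow of time: two snapshots of an
# isolated "universe" (particles diffusing on a line) can be ordered in time
# purely by comparing their coarse-grained entropies.
import numpy as np

def coarse_entropy(positions, bins=20, lo=-10.0, hi=10.0):
    """Shannon entropy of the coarse-grained particle distribution."""
    counts, _ = np.histogram(positions, bins=bins, range=(lo, hi))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(42)
universe = rng.normal(0.0, 0.5, size=10_000)   # early state: particles clustered

snapshot_a = universe.copy()
snapshot_b = universe + rng.normal(0.0, 2.0, size=universe.size)  # after diffusion

# The higher-entropy snapshot is the later one.
for name, snap in [("A", snapshot_a), ("B", snapshot_b)]:
    print(f"snapshot {name}: entropy = {coarse_entropy(snap):.3f}")
```

The clustered snapshot prints a lower entropy than the diffused one, so the temporal ordering falls out of the comparison, with no observer term anywhere in the calculation.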

 

The Relation of Consciousness to the Material World – by M. Velmans

Velmans advances that, because consciousness could not be found “within any information processing ‘box’ within the brain” (p.326) and could therefore not be enclosed within a functional explanation, and because idealism has the appalling problem of denying the existence of an ‘outside’ world altogether, we should find a deeper level of explanation of conscious phenomena: that “consciousness and its correlated brain states may be thought of as dual aspects of a particular kind of ‘information’, which is in turn, a fundamental property of nature.” (pp.326-327)

Such a property could be inferred from the fact that, as conscious experiences are representational, their neural correlates would likely be so too: qualia are ‘about something’, and their physical correlates would encode that very information. So both conscious experience and neurons would express the same information in two different ways.

Consciousness further has these two specific characters: it is related to “a late-arising product of focal-attentive processing” (p.327) and to ‘information dissemination’. The first property stems from the experimental evidence that we become conscious of decisions we have already made, or from the fact that conscious experiences are the final stage of the entire analysis process that our brains carry through. The second derives from a Weiskrantz experiment (1974), where subjects who were hemifield-blinded could nevertheless accurately guess under pressure the perception of certain physical stimulations, even though the stimulus information was not consciously available to them.

The theory thus exposed is quite similar to what Chalmers has proposed. There are some problems, though, with the split between ‘awareness’ and ‘consciousness’, for many processes such as the ability to discriminate and categorise could be performed without ‘awareness’, just as we wouldn’t say that a computer is ‘aware’. Thus the partition proposed by Chalmers is not satisfactory.

With regard to the relation between consciousness and information, Velmans stresses that it holds true only for the phenomenal aspects of information, for information processing is largely carried out in a nonconscious fashion, both by humans and machines. What would make information phenomenal? Besides the already known possibilities – that it could be related to the ‘wetness’ of living organisms, to a specific threshold neural representations must cross to become conscious, to specific brain regions, or to a combination of the previous options – Velmans adds that information could very well always have a phenomenal aspect, unless it is prevented from doing so – and the brain would perform this selection through its inhibitory powers; it would then be much more difficult to dismiss Chalmers’ hypothesis.

Velmans envisions a nonreductionist theory where the appearance of information depends on the perspective from which it is viewed, whether first-person or third-person. He recalls Quantum Mechanics to delineate a sort of ‘psychological complementarity’ principle: “Both a (third-person) neural/physical and a (first-person) phenomenal description are required for a complete psychological understanding of subjects’ representations.” (p.334) This is just a useful analogy to facilitate comprehension, though, and Velmans makes clear that the wave/particle manifestation has nothing to do with the mechanisms of consciousness: both waves and particles could in fact be detected from a third-person perspective, whereas a unified comprehension of consciousness could be reached only through the complementarity of what is accessible to both the observer and the experiencing subject.

 

Neurophenomenology: A Methodological Remedy for the Hard Problem – by F. Varela

[Image: Francisco Varela’s mapping of consciousness scholars]

(I found this picture to be quite illustrative of what has been discussed since the beginning)

Varela’s take on consciousness is phenomenological: conscious experience is irreducible. He reports the words of Searle to highlight the problem: “The ontology of the mental is an irreducibly first-person ontology … There is, in short, no way for us to picture subjectivity as part of our world view because, so to speak, the subjectivity in question is the picture.” (p.342) Varela stresses that a re-discovery of the direct quality of first-person experience could well provide a new, rejuvenating ground for all branches of knowledge.

Varela lays down the fundamentals of the phenomenological reduction (PhR) approach:

  1. Attitude: reduction. This is the mindful ability to think about one’s own thinking, to suspend one’s beliefs in order to open “new possibilities within our habitual mind stream.” (p.344) This disposition has to be systematic to produce meaningful results.
  2. Intimacy: intuition. Reduction should produce a less loaded, more vivid baggage of experience. The opening up of new mental horizons unleashes intuition, and is therefore “the basis of the criteria of truth in phenomenological analysis, the nature of its evidence.” (p.345)
  3. Description: invariance. Intimacy is not the end of the process, otherwise we’d be left with a bunch of ghosts. The intuitive evidence gets expressed and re-shaped through the iterating process of communication, which has to be thought of more as an embodiment than an encoding. What would be produced are invariant materialisations of consistent and inter-subjectively verified intuitions.
  4. Training: stability. The whole process would be fruitful only if systematically repeated, and encoded in a community of phenomenologically-oriented researchers.

The bracketing of quick inferences injects new variability into the process. Thus, the subject-object duality vanishes into a broad field of phenomena (what Husserl called the ‘fundamental correlation’). The nature of such duality emerges therefore as a manifestation of conscious processes, and the investigation of its structure casts light on its inextricable links with others’ conscious experiences. But, warns Varela, “the line of separation – between rigour and lack of it – is not to be drawn between first and third person accounts, but determined rather by whether there is clear methodological ground leading to a communal validation of shared knowledge.” (p.348)

In this view, what does neurophenomenology bring to the solution of the hard problem? Rather than seeking ‘extra ingredients’ or merely functionalist explanations, it seeks to find bridges between two irreducible phenomenal domains – that of matter and that of subjective experience. PhR provides the framework to account for both subjectivity and objectivity through the intersubjective, rigorous verification of one’s intuitions.

Phenomenological mastery should be encouraged as cognitive science technologies allow us to investigate conscious experiences more subtly. Varela proposes a double constraint, on both empirical questions and first-person accounts, to validate any neurobiological advance: examination in the form of reduction, production of invariants and intersubjective verification should be backed up, informed and validated by phenomenological accounts, and vice versa. This is the missing piece in functionalist explanations of consciousness, which alienate human life; neurophenomenology allows us to put it back in place. [40] I was a little surprised to find few references to the phenomenological approach exposed here in the (limited) literature I’ve been reading lately. Maybe it is assumed that suspension of judgement is already embedded in good critical thinking, and that too much ‘bracketing’ could impair any advancement. The approach that I found in the mentioned literature is predominantly pragmatic, so it might be that the burden of doubt is sometimes set aside in order to advance the most likely hypothesis, in line with Bayesian reasoning. In the end, sometimes it is better to advance with few doubts and avoid the costs of drowning in the details, while maintaining a Bayesian approach and being ready to change one’s mind in light of new evidence.
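For the record, the Bayesian updating the footnote appeals to has a very small formal core; a minimal sketch, with arbitrary toy numbers of my own choosing:

```python
# Minimal Bayesian update: how a prior credence in a hypothesis shifts
# in light of new evidence. Numbers are arbitrary, for illustration only.
def bayes_update(prior, likelihood, likelihood_given_not):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

belief = 0.5                       # initial credence in a hypothesis
for _ in range(3):                 # three pieces of moderately favourable evidence
    belief = bayes_update(belief, likelihood=0.7, likelihood_given_not=0.3)
print(f"credence after evidence: {belief:.3f}")   # ~0.927
```

The point of the footnote in this form: one can commit to the currently most likely hypothesis while keeping the update rule ready for whatever evidence arrives next.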

 

The Hard Problem: Closing the Empirical Gap – by J. Shear

Shear’s take on the ‘hard problem’ is that its solution will depend upon a deeper understanding of conscious phenomena through empirical research. He remarks that while our comprehension of the physical world has been exponentially refined, much less has been done to explain conscious phenomena, and our discussions on the matter “often remain based on superficial, commonsensical perception, classification and understanding of the contents of our inner awareness.” (p.361) Systematic scientific knowledge has thus to be extended to the realm of the subject.

The major objection that Shear has to overcome is that posed by Searle: that consciousness is not reducible, and yet not observable, for, by observing one’s own subjectivity, “any observation that I might care to make is itself that which was supposed to be observed.” (p.364) If such a view holds true, we would be in a position where introspection could not provide empirical data, thus making any scientific aim groundless.

Shear relies on theory of mind to rebut Searle’s impasse: according to some research [41: Wellman, H.M. – The Child’s Theory of Mind (1990)], the child’s ability to discriminate between what pertains to the mental, e.g. dreams, and what pertains to the physical springs from the reflective evaluation of the criteria by which classified phenomena are intersubjectively accessible. Thus, observation and subsequent reporting could perfectly well be described as real and consistent whether they turn out to be right or fallacious, to carry objective or subjective content; no longer should we consider consciousness less knowable by means of introspection than the physical is by means of perception.

Shear reminds us, vaguely echoing Varela [42: see the previous paper on Neurophenomenology], that doing science has to do with the establishment of a rigorous method for intersubjective validation in accordance with specific protocols, rather than depending on the alleged physical or nonphysical nature of its objects of investigation. Thus, Shear advances, “the independence of the observer that is paradigmatically relevant to scientific methodology, and thus science itself, is that of the truth of conclusions, rather than that of objects referred to.” (p.369, original italics) If science continues to develop in this direction, a ‘science of consciousness’ would correlate individual reports of experiences with ‘objectively observable phenomena in accord with standard objective scientific protocols.’

Shear then addresses the development of eastern contemplative practices as extremely useful, as their inquiry into the realm of conscious phenomena has both led to extremely rich reporting and, more importantly, to the independent observation of a primary, ‘pure’ nature of consciousness. What could such a detection tell us? Some physicists (Bohm, Wigner) have speculated that the rising of conscious phenomena from such a pure state is qualitatively almost identical to what happens in Quantum Mechanics, where matter seems to emerge from wave-like fluctuations. [43] The expert reader will notice that Wigner is notoriously the only physicist who claimed consciousness to play a causal role in QM; Bohm’s model, on the other hand, surprisingly well considered among contemporary physicists (see this survey – not much consensus seems to be found anyway: see for example how Sean Carroll comments on another survey), has the problem of inserting ‘hidden variables’ to be consistent (find out more at http://www.preposterousuniverse.com/blog/2008/08/08/quantum-diavlog/). Shear aims along these lines at closing the gap between materialist, idealist and nondualist resolutions of the ‘hard problem’, by pointing out how consciousness and matter should be considered much more similar than is ordinarily thought.

 

Moving Forward on the Problem of Consciousness – by D. Chalmers

In his response paper, Chalmers addresses each of the critiques levelled at him and each positive contribution in turn:

  • Deflationary Critiques:

Dennett’s and Churchland’s arguments fall under type-A materialism: that once we know everything about the functions performed by the brain, we are left with nothing else, and the very question of consciousness would therefore cease to exist. The problem with this position is that it seems to deny a manifest fact about what we know – the existence of conscious experience. Chalmers adds that such a strong position needs equally strong arguments to support it, but none can be found in Dennett’s and Churchland’s dissertations: the analogies of vitalism and heat are fundamentally different in their functionalist explanations from that of consciousness, because the latter lies “at the centre of our epistemic universe, rather than at a distance.” (p.383) If life and thermodynamical qualities could, in the end, be reduced to mere structures and functions, there is something unique about consciousness: we know that there is something else besides structure and function. Explaining experience is therefore something that lies outside a functionalist view of consciousness.

Chalmers argues that Dennett can eliminate consciousness because he aprioristically settles into a ‘third-person absolutism’; a first-person perspective would nevertheless leave the hard problem intact. Dennett’s request for ‘independent’ evidence of the existence of experience could not be met, because experience is not ‘postulated’ to explain other phenomena in turn; it should therefore be taken as irreducible. The scientific character of Dennett’s and Churchland’s positions could thus be rooted in ‘third-person absolutism’, nothing more than a philosophical claim; those who are impressed by first-person phenomenology would equally settle in an irreducible enemy camp.

Type-B materialists recognise that there is a conceptually distinct phenomenon of consciousness; they nevertheless subsume such a difference under higher-level systems – that is, explaining structure and function with more structures and functions. Clark and Hardcastle resolve the issue by transforming an a priori difference between consciousness and functional properties into an a posteriori identity by means of correlations; this “makes the identity an explanatory primitive fact about the world.” (p.388, original italics) The bruteness of explanatorily primitive facts regularly identifies them as fundamental laws – therefore, type-B materialism would inevitably fall into the position that Chalmers is arguing for. Type-B materialism cannot work if it is not able to turn the identification into an explanation – and that is precisely the pain point for everyone involved in the ‘hard problem’. [44] Chalmers assumes here that “once it is noted that there is no conceptually necessary link from physical facts to phenomenal facts, it is clear that the idea of a physically identical world without consciousness is internally consistent.” (p.390) This is what legitimates philosophical zombies as useful thought-experiments to argue in favour of a non-reductive explanation of consciousness. A very good response comes from Sean Carroll in this article: “Imagine a zombie stubbed its toe. It would cry out in pain, because that’s what a human would do, and zombies behave just like humans. When you stub your toe, certain electrochemical signals bounce around your connectome, and the exact same signals bounce around the zombie connectome. If you asked it why it cried out, it could say, “Because I stubbed my toe and it hurts.” When a human says something like that, we presume it’s telling the truth. But the zombie must be lying, because zombies have no mental states such as “experiencing pain.” Why do zombies lie all the time? […] The problem is that the notion of “inner mental states” isn’t one that merely goes along for the ride as we interact with the world. It has an important role to play in accounting for how people behave. In informal speech, we certainly imagine that our mental states influence our physical actions. I am happy, and therefore I am smiling. The idea that mental properties are both separate from physical properties, and yet have no influence on them whatsoever, is harder to consistently conceive of than it might first appear. According to poetic naturalism, philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems.”


  • Nonreductive Analyses:

Among those who believe that Chalmers might have overestimated the easiness of the ‘easy problems’ – that ‘reportability’ and other functions could not be fully explained without consciousness (Lowe and Hodgson) – Chalmers answers that his position on reportability, for example, should be interpreted as the mere presence of reports, leaving aside the fact that experience and thoughts are required for producing the report.

In addressing what Warner called the problem of incorrigibility, Chalmers points out that many beliefs about experience don’t have the characteristic of being incorrigible – they do not directly participate in defining our concept of experience; those which do would therefore be incorrigible and constitute the first-person epistemology of conscious experience.

In defence of epiphenomenalism, prompted by accepting the causal closure of the physical domain, Chalmers notes that our only evidence for the causal role of consciousness lies in the intuition that some conscious events are systematically followed by certain physical events. “But the epiphenomenalist can account for this evidence in a different way, by pointing to psychophysical laws, so our intuitions may not carry much weight here.” (p.401) “It is not obvious that consciousness must have a causal role.” (p.402) [45] In refutation of this, see note 44.

As for interactionist dualism and the possibility of denying the causal closure of physical systems, the quantum approaches presented by Hodgson and Stapp show that they are not inconceivable. Chalmers goes on to argue in favour of a theory that can leave epiphenomenalism aside and preserve causal closure, while maintaining a nonreductive explanation of consciousness. How? Hawking himself (1988) has noted that there is no ‘fire’ under the equations that describe our physical reality – nothing that gives it any substance. It would therefore not be at all inconceivable to embrace what Bertrand Russell proposed in 1927 [46: The Analysis of Matter, London: Kegan Paul] and say that we can “locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just coloured in its nodes.” (p.405, original italics) [47] Chalmers appeals to the fact that if this idea were true, we could combine consciousness’ irreducibility and causal closure, while denying epiphenomenalism. I am not sure how this should suffice to shift our understanding of the world in such a radical way, as opposed to a simpler, Bayesianly better-grounded reductive approach. Furthermore, this apparent ‘naturalistic dualism’ would be nothing more than a fundamental, causal monism – which would simply tell us that physical reality goes far beyond what our physical theories are telling us.

Chalmers further specifies, with regard to his psychophysical laws, that

  1. “To hold that two subjects in the same functional state have the same conscious state is not to sell out to functionalism, except in an attenuated sense. Consciousness is not reduced to a functional state; it is merely associated with one. […] The invariance principle is intended as a non-fundamental law.” (p.408)
  2. “The ontology underlying the informational picture remains open. […] I favour the informational view largely because when I look for regularities between experience and the physical processes that underlie it, the most striking correspondences all lie at the level of information structures. We have to find something in underlying physical processes to link experience to, and information seems a plausible and universal candidate.” (p.410, original italics)

  • Positive Proposals

Among the neuroscientific approaches, Baars has argued that an ’empathy criterion’ is needed; not so for Chalmers, who remarks that solving the hard problem does not require us to have the exact experience of ‘what it is like to be a bat’, but rather to explain why there is anything it is like at all. In any case, the process of linking consciousness and global availability, like the search for the fundamental principles of conscious phenomena pursued by Crick and Koch, will be extremely useful and compatible both with scientific progress and with the irreducibility of consciousness.

The phenomenological approach introduced by Varela and Shear is of paramount importance, for it grounds the epistemology of the ‘hard problem’. The method is vulnerable to some flaws – the act of attention subtly transforms the nature of experience and makes it extremely hard to analyse; an adequate formalism for gathering phenomenological data still has to be developed; and there are the limits of incorrigibility – but these could generally be overcome by means of critical introspection and trust in scientific methodology.

Whereas neurocognitive science provides the third-person data, phenomenology accounts for the first-person data. And since such correlations can be detected quite easily only at a coarse-grained level, we may need new tools to find out how these two realms speak to each other in the deeper, finer-grained structures. Chalmers suggests that we speculate in the direction proposed by Penrose, Hameroff and Stapp – namely, that we consider how quantum mechanics could ground neural information-processing.

With regard to panpsychism, Chalmers reminds us that it “is not required for a fundamental theory; it is not written in stone that fundamental properties have to be ubiquitous.” (p.417, original italics) What may be a more accurate description is ‘panexperientialism’, even in the form of the X fundamental introduced by Hut and Shepard. And in order to solve the ‘combination problem’, one need not assume that experiences are assembled the way physical particles are: Chalmers introduces the idea that informational composition may be a more appropriate way of achieving conscious macro-combinations.


Recap

  1. Once type-A materialism is rejected, we ought to look for a further phenomenon to explain consciousness;
  2. Once it is recognised that type-B materialism rests on unparalleled explanatorily primitive identities, the problem of taking consciousness as fundamental cannot be avoided;
  3. There is a choice between holding onto the causal closure of the physical or giving it up; quantum mechanics could allow us to break it open, but the advantages are dubious;
  4. By choosing causal closure instead, we are left with placing experience either outside the physical network (epiphenomenalism) or inside it by virtue of ‘Russellian monism’. Chalmers favours the latter, with the proviso that such panexperientialism can solve the ‘combination problem’;
  5. The most substantial choice is that of the form of the proposed psychophysical theories: scientific approaches are favoured, but metaphysics should not be left out completely.

References

1. This supposition is based on the ground that mental things somehow don’t have spatial properties. It seems to me that the simple fact that we cannot produce two simultaneous thoughts should tell us enough about the issue, since today’s scientific framework takes space and time to be radically unified.
2. In support of this claim, he posits that reductionist views of the relations between scientific fields should be rejected, as certain problems are not transferrable between them. Consider, McGinn says, how “grotesque to claim that the problem of how the dinosaurs became extinct shows any inadequacy in the basic laws of physics!” (p.104). He couldn’t have better predicted the theme of Lisa Randall’s latest bestseller!
3. This statement holds true for fundamental entities that have demonstrated, strict relationships with other fundamental entities, such as physical particles. However, consciousness doesn’t have this status, as there is nothing else that supports its supposed fundamental nature.
4. This understates recent advancements in physics toward the confirmation of gravitons, and contrasts with the spirit of science that would always look for even more fundamental underlying laws.
5. I see this last move as an alternative way of placing consciousness in the realm of fundamental entities, just as Chalmers does.
6. https://en.wikipedia.org/wiki/Pathetic_fallacy
7. https://en.wikipedia.org/wiki/Defeasible_reasoning; Cruz, Joseph (1999) Contemporary Theories of Knowledge (Studies in Epistemology and Cognitive Theory), p.36
8. A detailed explanation of what impairment is and what it is not would be much needed, as it is so crucial to the point, but the author only provides references to his 1986, 1989, 1992, 1993 and 1994 works.
9. I omitted to report how the author deals with the two major objections to his argument, namely that his account doesn’t ‘invoke the subjectivity of conscious experiences’ and that physical perceptions per se could not have conscious experiences. Robinson answers the first objection by first designating consciousness as one among the properties of conscious experiences – remember the example of pain – then by rejecting relational theories [for X to be a conscious X is for it to stand in some relation, R, to some other thing, Y], thereby characterising the consciousness of conscious experiences as intrinsic and essential, i.e. “that they cannot exist without being conscious.” (p.157) He then proceeds to answer the second objection by sectioning colour perception into the molecular properties of the colour (colourO) and the ‘conscious experience that goes together with the perception’ of colourO things by a subject S, named colourC(S). At that point, subjects would necessarily learn to tie colourC(S) experiences to colourO objects by some causal neural paths, named colourN(S). Robinson claims that only colourC(S) properties are conscious.
10. Penrose argues that gravity curvature of space-time hasn’t been considered by quantum physicists. The official claim is that gravitational forces at microlevels are so tiny that considering them wouldn’t make any difference. Penrose instead argues that even almost-undetectable differences may have large effects. He proposes a superposed state made of different space-time sets, each of them ascribable to the possible “places” that the particle assumes in different states of the superposition and subsequent space-time curvatures due to its exerted gravity force. Such superposition is unstable, and will therefore decay under precise laws into the observable geometry that we get at the quantum state reduction. Penrose acknowledges that there is no consensus upon how objective reductions happen, but he sees no plausible alternatives to his proposal.
11. Available here.
12. Much as random variability is accounted for in Darwinian evolutionary theory.
13. To a physics-illiterate like me, the theory seemed coherent, almost convincing. Good science is to be especially skeptical when inclined to accept something, so I searched for some rebuttals of the exposed theory. Two very good responses that made me strongly revise my reading of Stapp’s interpretation can be found in this blog post by philosopher and skeptic Massimo Pigliucci, and in a much broader analysis of the issue by physicist Michael Nauenberg. Nauenberg further argues here that the very interpretation of the wavefunction as a non-physical object should establish that “there isn’t any mystery that its mathematical form must change abruptly after a measurement has been performed.” Furthermore, much of the misunderstanding which arose after von Neumann’s work stems from his simplistically considering the measurement apparatus as a superposition of two states, the “fired” and the “unfired” state. A correct approach is to characterise any macro-object such as a Geiger apparatus as a recorder of atomic events, which by the rules of thermodynamics (the arrow of time) should be considered an irreversible process. Irreversible processes, of course, are everything there is to the collapse of the superposition, and have nothing to do with the presence of a conscious being. Nauenberg remarks that Wigner was in fact the only major physicist (a Nobel laureate) to support the role of consciousness in the collapse of the wavefunction.
14. Not much is provided to explain such claim, besides clinging to an unsolved measurement problem – not quite unsolved, as Nauenberg previously showed – and a 1929 Bohr quote about the indeterminacy of the subject-object boundaries of perception, where “no sharp separation between subject and object can be maintained, since the perceiving subject also belongs to our mental content.” (quoted at p.219)
15. There is a mistake in the argument: a dual aspect of reality cannot really stand, simply because the claimed ‘objective’ part is not physical, as the wavefunction has no physical properties. This is strange, because Bilodeau himself did not fail to notice this feature of the superposition, although he claims it should be recalled in a dual-aspect theory of reality.
16. This is the clarification you may have been waiting for when reading about the concocted underlying “non-mechanical mode of existence.”
17. By virtue of what has been previously exposed, I trust you may infer how weak such an argument is. Nevertheless, this being a comparative collection of papers on consciousness, I wanted to include it as well.
18. “In technical jargon, conscious contents and self may be orthogonal constructs, which always coexist but do not necessarily covary.” (p.245)
19. How exactly do protophenomena coherently produce phenomenal experience? “A population of protophenomena dependent on the same input protophenomena has a Conditional Probability Density Field (CPDF) that is the product of the CPDFs of all the high-intensity input protophenomena, that is, of all the input protophenomena present in the current conscious state. The CPDFs of individual protophenomena can be quite broad, but in the joint response to the same input of a large number, the product can be very narrow, so that they define a phenomenal state quite precisely.” (p.255) A toy calculation after these notes makes the narrowing concrete.
20. Of course, what I have just exposed here is the main argument Seager moves against Dennett. The argument thus reported is convincing, but I don’t know Dennett’s position well enough to vouch that Seager was fair in his interpretation. Dennett states in the reported paper that “Whether people realise it or not, it is precisely the ‘remarkable functions associated with’ consciousness that drive them to wonder about how consciousness could possibly reside in a brain. In fact, if you carefully dissociate all these remarkable functions from consciousness – in your own, first-person case – there is nothing left for you to wonder about.” (p.35, original italics)
21. Seager offers no explanation of why such a possibility should be considered unattractive. It seems that further argumentation is needed, considering that this would be the sole ground for rejecting functionalism as a likely explanation of conscious phenomena. I really am astonished that nothing more has been said.
22. Red flag: Seager is a philosopher – although he specialised in philosophy of science – not a quantum physicist. I am not proposing that philosophers shouldn’t take Quantum Mechanics into their models, but alternative interpretations of QM than those held by the scientific consensus should be suspect, especially when put forward by non-specialists.
23. It is not clear how the particles’ state could fail to be affected, given that we can’t possibly know what their previous state was. If the superposition of the particle’s state passing through the two slits comes with a concomitant interference pattern, then the detector has not been put in place, since the interference pattern implies wave-like behaviour, which implies the impossibility of predicting where the particle has gone. If we place the detector and know the exact position of the particle, the interference pattern disappears, as we are now ‘seeing’ the particle-like nature of the phenomenon. Can we say that no effect on the particles’ state has been exerted? No, since we can’t possibly know what the state was like before the detector was placed. (A minimal numerical sketch of the two regimes appears after these notes.)
24. Needless to say, maybe, but such an explanation could very well count as an answer to the generation problem itself; no panpsychism would be needed at that point. Somehow, Seager fails to see this point when he asserts that “quantum coherence cannot solve the generation problem satisfactorily, but it might solve the combination problem.” (p.283)
25. I think the inconsistency of Seager’s point has been previously well-illustrated: randomness of the wavefunction collapse does not imply any hidden variable, even less so conscious causality.
26. Seager is willing to propose an essentially useless explanation – as it withdraws itself from falsifiability, how could it be useful? – to account for panpsychism.
27. Can such people even call themselves Naturalists? In fact, the Stanford Encyclopedia of Philosophy clarifies that Naturalism is not much of an informative term anymore. “For better or worse, “naturalism” is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as “non-naturalists”. This inevitably leads to a divergence in understanding the requirements of “naturalism”. Those philosophers with relatively weak naturalist commitments are inclined to understand “naturalism” in an unrestrictive way, in order not to disqualify themselves as “naturalists”, while those who uphold stronger naturalist doctrines are happy to set the bar for “naturalism” higher.” Source: http://plato.stanford.edu/entries/naturalism/
28. I think the flaws here are to pretend 1) that there is a fundamental law of consciousness, instead of providing a mere agglomerate of characteristics as a definition, and 2) that consciousness laws should have the same elegance and simplicity as the atomic laws of nature. Consciousness is invariably not an elemental property of matter (see the critique of panpsychism provided earlier), and the parsimony constraint should convince us that nothing we know about particles would be altered by attaching a qualitative property to them; they would behave just as they do under the current, merely physical description.
29. This objection is somewhat more difficult to reject. Consider, though, the poetic naturalistic approach (illustrated by Sean Carroll in The Big Picture and deepened in this blog post): beyond a purely physical, elemental level, we try to make sense of the world in ways that are useful to us. We can therefore set an arbitrary cut-off between conscious and non-conscious systems because, as explained before, consciousness is nowhere to be found as a fundamental property of the universe, at least not as fundamental as quantum fields. It is up to us to characterise which systems are sufficiently complex to be called ‘conscious’ and which are not, just as we decide to call ‘flounder’ and ‘tuna’ two different types of fish. Of course, defining consciousness is somewhat more difficult than that, and so we can’t expect to define simple consciousness laws. Furthermore, it is not meaningful to say that we can just subtract one neuron at a time to decide when an organism stops being conscious: complex systems are by definition populated by hundreds of thousands of neurons, so clearly counting them one by one would not yield a meaningful consciousness threshold.
30. This last point proves even further that fundamental laws of consciousness comparable to fundamental physical laws are not what we should be looking for.
31. This is, in my opinion, a further reason not to look for a fundamental law of consciousness, as it is fundamentally subjective.
32. I hasten to remind the reader that this is an evident pathetic fallacy.
33. The designed experiment, it seems, was never carried out. Libet’s hypothesis sounds much like contemporary electromagnetic-field theories of consciousness.
34. Two things: first of all, we cannot claim to know exactly what is going on in the brain, so as to say that two regions are physically identical. No serious neuroscientist has ever said so, even now, almost 20 years after this book was printed. So claims about the exact nature of the brain back in the late Nineties seem naive at best, and that is even more alarming considering that Shepard is a cognitive scientist. Even more worrisome is assuming that conscious experience is ‘evidently nonphysical’ without taking the trouble to defend such a position, other than by saying that it is ‘commonly accepted’.
35. The authors here leverage the word ‘indistinguishable’ to suggest that everything we know about conscious and non-conscious events leads to identical physical explanations; but even though in this case it might be appropriate to say that two firing neurons are somehow identical physical processes, if we know nothing about the hundreds of their neural connections, we can’t attach any meaning to the firing of a single neuron.
36. In fact, this is rather more than an assumption, and it should undoubtedly tell us that conscious experience is a physical process.
37. These sets of molecules and qualia are so ‘meaningless’ and randomly put together that we base our very existence on them! The authors may well define arrangements of molecules as meaningless, but then they should not use these meaningless objects to set up a new theory. Rest in solipsism.
38. What if we proposed instead a thermodynamical explanation of time? The arrow of time defines time by the fact that the universe has an overall larger entropy than it had the moment before. This is how we came to infer that something like the Big Bang could have taken place. Such an explanation would need no concept of time, except for the observers to exist and be able to link present experiences to previous ones, even if only for 2 seconds: it would just be a matter of comparing different universe states and consistently observing that the snapshot of the second moment has a higher overall entropy than the snapshot of the first moment. The thermodynamical explanation would clear up some of the messiness around a ‘snapshot explanation’. Of course, the concept of time would be experientially, implicitly involved; I cannot otherwise imagine how a thought-experiment itself could be conceived out of time.
39. I have a major concern here, which is the proposed fundamental property of time: in thermodynamical terms, again, time is nothing more than the concept we attach to the experimental evidence that universes proceed irreversibly toward a state of increased entropy. Two snapshots from an isolated universe showing two different entropies would tell us which of them came before and which after, without the need for anyone to observe the process (a toy sketch of this inference appears after these notes). Time is something we infer from those spontaneously unfolding processes, and we can confidently say that they would unfold with or without us. “Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” (Richard Feynman)
40. I was a little surprised to find few references to the exposed phenomenological approach in the (limited) literature I’ve been reading lately. Maybe it is assumed that suspension of judgement is already embedded in good critical thinking, and that too much ‘bracketing’ could impair any advancement. The approach that I found in the mentioned literature is predominantly pragmatic, so it might be that the burden of doubt is sometimes set aside in order to advance the most likely hypothesis, in line with Bayesian reasoning. In the end, it is sometimes better to advance with few doubts and avoid the costs of drowning in the details, while maintaining a Bayesian approach and being ready to change one’s mind in light of new evidence.
41. Wellman, H.M. – The child’s theory of mind (1990)
42. See the previous paper on Neurophenomenology.
43. The expert reader will notice that Wigner is notoriously the only physicist who claimed consciousness to play a causal role in QM; Bohm’s model, on the other hand, surprisingly well-considered among contemporary physicists (see this survey – not much consensus seems to be found anyway: see for example how Sean Carroll comments on another survey), has the problem of inserting ‘hidden variables’ to remain consistent (find out more at http://www.preposterousuniverse.com/blog/2008/08/08/quantum-diavlog/).
44. Chalmers assumes here that “once it is noted that there is no conceptually necessary link from physical facts to phenomenal facts, it is clear that the idea of a physically identical world without consciousness is internally consistent.” (p.390) This is what legitimates philosophical zombies as useful thought-experiments to argue in favor of a non-reductive explanation of consciousness. A very good response comes from Sean Carroll in this article: “Imagine a zombie stubbed its toe. It would cry out in pain, because that’s what a human would do, and zombies behave just like humans. When you stub your toe, certain electrochemical signals bounce around your connectome, and the exact same signals bounce around the zombie connectome. If you asked it why it cried out, it could say, “Because I stubbed my toe and it hurts.” When a human says something like that, we presume it’s telling the truth. But the zombie must be lying, because zombies have no mental states such as “experiencing pain.” Why do zombies lie all the time? […] The problem is that the notion of “inner mental states” isn’t one that merely goes along for the ride as we interact with the world. It has an important role to play in accounting for how people behave. In informal speech, we certainly imagine that our mental states influence our physical actions. I am happy, and therefore I am smiling. The idea that mental properties are both separate from physical properties, and yet have no influence on them whatsoever, is harder to consistently conceive of than it might first appear. According to poetic naturalism, philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems.”
45. In refutation to this, see note n.44.
46. The Analysis of Matter, London: Kegan Paul.
47. Chalmers appeals to the fact that if this idea was true, we could combine consciousness irreducibility and causal closure, while denying epiphenomenalism. I am not sure how this should suffice to shift our understanding of the world in such a radical way, as opposed to a more simple, bayesianly well-grounded reductive approach.
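
A toy calculation, mine and not the authors’, makes the narrowing claim of note 19 concrete: if each of n input protophenomena contributed a Gaussian CPDF with a common mean μ and width σ, their product would again be Gaussian, but with width σ/√n:

```latex
\prod_{i=1}^{n} \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)
= \exp\!\left(-\frac{(x-\mu)^{2}}{2\,(\sigma/\sqrt{n})^{2}}\right)
```

With n in the thousands, even very broad individual CPDFs would jointly pin down a phenomenal state quite precisely – which is all the quoted passage needs.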
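
Note 23 turns on the difference between adding amplitudes and adding probabilities. A minimal numerical sketch of the two regimes – all numbers arbitrary, nothing taken from the book:

```python
import numpy as np

# Toy two-slit model: without a which-path detector, amplitudes add and
# interfere; with a detector, probabilities add and the fringes vanish.
wavelength, slit_sep, screen_dist = 1.0, 5.0, 100.0
x = np.linspace(-40, 40, 9)  # sample positions on the screen

def amplitude(x, slit_offset):
    """Unit-magnitude amplitude from one slit; phase set by path length."""
    path = np.sqrt(screen_dist**2 + (x - slit_offset) ** 2)
    return np.exp(2j * np.pi * path / wavelength)

a1 = amplitude(x, -slit_sep / 2)
a2 = amplitude(x, +slit_sep / 2)

no_detector = np.abs(a1 + a2) ** 2                 # |psi1 + psi2|^2: fringes
with_detector = np.abs(a1) ** 2 + np.abs(a2) ** 2  # |psi1|^2 + |psi2|^2: flat

print(np.round(no_detector, 2))   # oscillates between ~0 and ~4
print(np.round(with_detector, 2)) # constant 2: interference gone
```

Placing the detector changes which of the two sums describes the screen, which is exactly why one cannot claim the particles’ state is unaffected.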
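
Notes 38–39 rest on the claim that two snapshots of an isolated system can be time-ordered by their entropies alone. A toy sketch of that inference – the ‘snapshots’ are made-up lists recording which half of a box each gas particle occupies, nothing from the book:

```python
from collections import Counter
from math import log

def coarse_entropy(snapshot):
    """Shannon entropy (in nats) of the left/right occupancy distribution."""
    counts = Counter(snapshot)
    total = len(snapshot)
    return -sum(c / total * log(c / total) for c in counts.values())

early = ['L'] * 90 + ['R'] * 10  # gas bunched in one half: low entropy
late = ['L'] * 55 + ['R'] * 45   # gas spread out: higher entropy

snapshots = {'A': late, 'B': early}
ordered = sorted(snapshots, key=lambda k: coarse_entropy(snapshots[k]))
print(ordered)  # ['B', 'A']: the lower-entropy snapshot is inferred to come first
```

No observer enters the computation: the ordering falls out of the second law alone.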

The Bodhisattva’s Brain

The Bodhisattva's Brain - Buddhism Naturalised by Owen Flanagan

ISBN: 978-0262525206

READ: 2016-08-25

AUTHOR: Owen Flanagan, Professor Emeritus of Philosophy at Duke University


Is there any underlying, testable truth in the claim that Buddhism generates happy people? I lived with Buddhists for several months. They seemed happy. The point is, even patients who take homeopathic remedies report feeling good, and that does not prove homeopathy to be effective. The power of positive belief has to be examined and weighed when tackling such complicated questions about ‘happiness’, all the more if we acknowledge that happiness has many facets, and research is at too early a stage to be able to distinguish between different feelings of happiness.

Catching myself in self-deception and uncovering its mechanisms has been largely instructive. This is why I find it extremely interesting that Buddhism itself regards false beliefs (moha, delusion) as morally wrong, an obstacle toward enlightenment. The Dalai Lama himself summarised Buddhist epistemology in these terms: “Buddhism accords greatest authority to experience, with reason second, and scripture last”. Such an affirmation may give naturalists legitimate hope of engaging in a fruitful conversation with Buddhism. Considering the adaptive nature of religions, it is not crazy at all to sketch out how a naturalised Buddhism ought to look.

There is a caveat, though: karmic causation. Karmic causation can be understood as the set of causations produced by sentient beings, both intrapersonally and interpersonally. Problems with reconciling karmic causation and ordinary causation may emerge when Buddhists, as it seems from both Flanagan’s thorough explanation and widely-known Buddhist concepts, introduce the following theses:

  1. That the emergence of sentience was somehow planned in advance;
  2. That human consciousness is of a different ontological type than natural laws are.

Flanagan generously offers two possible exits from the impasse, a ‘tame’ and an ‘untame’ one: the tame interpretation is to consider the ‘law of karma, by which an intentional act will reap certain fruits’ as a subtype of ordinary causation, pertaining specifically to sentient beings, which would provide the conceptual framework for the moral sciences.

The less tame interpretation is to consider karmic causation as ontologically independent: the metaphysical force that frees the stream of consciousness from the body and produces future, morally-charged causes; doing so constitutes a soteriological theory of rebirth. It strengthens Buddhist ethics, providing a hint of ‘hidden causality’ behind the randomness of Darwinian evolution and the related potential threats to the meaning of life; it cannot, however, be supported within a scientific framework.

Buddhism wants to keep the ontological question of consciousness open, although scientific evidence weighs overwhelmingly against such hypotheses.

So far, so good. To reach the point of what Buddhism can teach us about happiness, a thorough examination would point out how Buddhism is essentially eudaemonic. According to Flanagan, following Buddhist moral principles might – though not necessarily – cause happiness, specifically a kind of happiness aptly labelled happiness[Buddha], to distinguish it from happiness of the [happy-happy/joy-joy/click-your-heels] or [hedonistic] sort.

The most interesting claim from Buddhism is that its metaphysics would necessarily imply a set of moral values, and that by working around those values, the eudaimon would reach contentment and happiness. The metaphysics revolves around a narrative of ontological impermanence, both of the natural world and of the alleged ‘self’. Such a Heraclitean universe would imply a non-strict concept of personal identity, much as Locke asserted. How does such a conception contribute to reducing suffering (dukkha, the impossibility of satisfying all desires)? Recognising the impermanence of things could make me feel better about losing a loved one; embracing the impermanence of self may come in handy to let go of afflictions, by seeing that they don’t belong to me anymore, because I am now a somewhat different person than I was when I originated them. This is the Buddhist recipe, to be discovered through wisdom (scripture) and experience (meditation), for alleviating suffering.

Buddhism proves to be a useful therapeutic means. But Flanagan rightly argues that there is no logical connection between gaining the wisdom of being a selfless person and being moral. Plato himself wasn’t able to explain why a man who managed to exit the cave should go back and rescue his peers, instead of rejoicing in his conquered virtues. True, Buddhism is a deep psychology that aims at changing human behaviour from the very roots; it is a practical set of moral principles stemming from metaphysical laws to overcome earthly suffering. Its metaphysical foundations are nevertheless powerless to deliver moral outcomes by logical argument: thinking of oneself as selfless could equally likely produce a selfish, take-all attitude. To act selflessly in a generous way, one would therefore be motivated only to the degree that a prior inclination to link ontological selflessness and unselfishness was in place.

Again, if a selfless ontology may at most provide some psychologically fertile ground for the flourishing of morality, happiness is not a logical consequence. Rooting Buddhist eudaemonia in wisdom, virtue and mindfulness – none of which is exclusively normative – doesn’t imply happiness as a necessary outcome. A normative exclusionary clause is further needed: happiness is worthy and ‘true’ only if it is attained through moral principles, and not by taking a magic pill, for example.

Buddhism is very useful in prospecting a rich eudaemonistic theory, characterised by compassion and focused on individual flourishing, which would cascade positively through interpersonal relationships. In such a view, selflessness promises flourishing only to those who embrace a nonindifferent attitude and choose to make life a worthwhile, fulfilling project.

There is more. Following the eudaemonistic path, and accepting that virtue is the sole source of happiness with the exclusionary clause previously mentioned, Buddhism is not the only available option. How can we tell which virtue is the right one? It is widely recognised that virtues are psychologically useful and therefore real, but they are nevertheless inclinations, not independent things. Virtues are also ecologically sensitive, so they can be evaluated only if we somehow define a broad and general concept of ‘True Happiness’ and proceed to examine the different virtue variables. Furthermore, anachronism and ethnocentrism are somehow unavoidable, so any presented conception will be biased in this way, and can never be addressed from ‘the point of view of nowhere’.

Liberal commonsense morality is extremely cautious about deliberating any shared vision of what a good life should be. It is so much simpler and less demanding than Aristotelian ethics. Aristotelian ethics is equally eudaemonistic, and claims that, empirically, virtue is a reliable cause of happiness. Such reasoning holds if we insert the previous exclusionary clause and say that happiness does not count if it stems from false beliefs or shortcuts, with a further warning against self-indulgent talk of a morally chauvinistic nature.

Aristotle leans more toward justice, reason and right action, whereas Buddhism places paramount importance on compassion and loving-kindness. Both morals address the fellow-feeling aspects of human nature as necessary – aspects which have in turn recently been supported by evolutionary research – with Buddhism being much more emphatic about feelings of compassion and magnanimity. Buddhism holds that jealous rage, for example, although efficacious in evolutionary terms, should nevertheless be tamed, for when expressed it influences one’s mental state in a stressful way, with possible counterproductive outcomes. As this last example illustrates, Buddhism could be considered slightly more demanding than Aristotelian ethics, but still feasible.

Ethics cannot produce a single, indubitable theorem for living a good life. It has never done so, and it probably never will. Nevertheless, any attempt at deconstructing different morals is fruitful, for it brings new elements to the ethical draft we are constantly called upon to sketch. Buddhism as such does not quench the rational thirst to know why, given the impermanence of everything, compassion should be preferable to hedonism. A cosmopolitan view of the matter doesn’t allow us to settle for any single traditional way of living a good life; it does, however, enjoy the process of looking into ancient wisdoms for useful advances in the project of human flourishing. The very fact that there is no true answer is at least a clear hint of Buddhist style.

Consilience

Consilience - The Unity of Knowledge by Edward Wilson


ISBN: 978-0679768678

READ: 2016-08-24

AUTHOR: Edward O. Wilson


Consilience is one of the four cardinal principles of the scientific method: theories which conform with established knowledge from other disciplines have shown greater fruitfulness in prediction and experimental evidence than those which don’t. The others, by the author’s canons, are parsimony, generality, and predictiveness.

Wilson advocates the search for consilience between the natural and the human sciences, and centres his argument on the field of genetics and culture – gene-culture coevolution – emerging back in the late Nineties. At the time he wrote, the rift between those who stood for a genetic account of culture and the postmodernists loomed large, but both biological and psychological discoveries led to the emergence of new fields, such as sociobiology and behavioural genetics. If such an enterprise could be achieved, it would help us understand the world more profoundly, and impose a sense of even greater order on the seeming chaos of the universe.

I could not understand how a theory of human behaviour could be approached without help from biology and the natural sciences in general. Although I believe this obstacle has been largely overcome already, the present work lays out all the obstacles that have been strewn across that endeavour. Other would-be sciences – such as economics – have a long way to go before they include the whole biological toolkit in their theories, and thereby prove more sound and empirically practical.

But the issue still looms large within ethics, where it resides in the rivalry between transcendentalists and empiricists. It signals that there is still much disagreement about where human nature should be traced: transcendentalists fundamentally deny consilience by excluding epigenetic causes of human behaviour, and posit the existence of moral laws as permanent, simply waiting to be discovered “as they truly are”.
The proposed transition from genes to ethics has been widely challenged by the is-ought objection, namely that what is (epigenetic rules) cannot determine what ought to be (ethical behaviour) – also known as the naturalistic fallacy. Wilson argues that transcendentalists conversely posit an unlikely independent truth which would determine everything humans ought to do: which of the two alternatives is more likely? By the empiricists’ explanation, ought simply derives from material processes.

Wilson doubts that transcendental positions, especially in their religious forms, will ever disappear, for they are part of the very genetic equipment that has made humans fit until now. He manages, however, to draw precisely how religion would need to interact with empiricism to remain alive:

“Science faces in ethics and religion its most interesting and possibly humbling challenge, while religion must somehow find the way to incorporate the discoveries of science in order to retain credibility. Religion will possess strength to the extent that it codifies and puts into enduring, poetic form the highest values of humanity consistent with empirical knowledge.” (p.290)

What waits ahead of us is the possibility to “decommission natural selection, the force that made us”. Wilson predicts that we will embrace a genetic conservatism: we are not equipped to change anything beyond the genes responsible for diseases, for we would recognise any other intervention as driving us away from what is considered to be human.
How would we make use of our superpowers? Humans have become a force comparable to asteroids and tectonics: we are shaping the planet in unprecedented ways. The environmental issue poses the next big threat to our very existence, and it should be tackled both by those who believe in humanity’s boundless technological abilities, which would allow us to live on Mars, and by those cautious environmentalists who see how complex Earth’s mechanisms are, and that reproducing them might not be as easy as depicted.

Being an acclaimed biologist, Wilson supports the conservative side, holding that “The only way to save the Creation with existing knowledge is to maintain it in natural ecosystems.” (p.324) Technology will be of great support in taming the demographic problem, but let’s not forget that each technological prosthesis makes us, and the world which sustains us, more fragile.

The Big Picture


ISBN: 978-0525954828

READ: 2016-08-05

AUTHOR: Sean Carroll


Pretending to draw a Big Picture could be seen as somewhat pretentious. Still, Carroll manages to put together quite a detailed state of the art of what science can currently tell us about who we are.
Carroll calls his system of thought “poetic naturalism”: a picture of the universe that needs nothing but physics to explain its functioning, and yet comprises multiple levels of understanding as “useful ways of explaining how things work” in different domains.

The author moves through basic cosmology, the theory of evolution and thermodynamics, including the most modern views on the origins of life. He places the foundations of his work on quantum mechanics and the Core Theory, the most successful physical account – so far – of how particles become the matter we see in our lives. Embracing the tools of Bayesian reasoning, Occam’s Razor and the arrow of time, he proceeds from what the laws of physics tell us to rule out theism, re-examining the old Cartesian dualism.
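
For readers unfamiliar with the Bayesian machinery Carroll leans on, here is a minimal sketch of how credences get updated; the hypothesis and the numbers are illustrative assumptions of mine, not Carroll’s:

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    evidence = likelihood_h * prior + likelihood_not_h * (1.0 - prior)
    return likelihood_h * prior / evidence

credence = 0.5  # start agnostic about hypothesis H
for _ in range(3):  # observe three pieces of evidence, each 3x likelier under H
    credence = posterior(credence, likelihood_h=0.6, likelihood_not_h=0.2)
print(round(credence, 3))  # ~0.964: credence climbs as evidence accumulates
```

The same arithmetic, run with likelihoods favouring the plain physical picture, is how Carroll argues naturalism out-competes theism.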

He then tackles the problem of consciousness and free will – again, poetic naturalism labels these phenomena real, as they are emergent properties of the basic quantum fields that make up everything we see. So “life” and “consciousness” are properties of a particular set of particles. We might never be able to calculate the exact quantum field of that set because of the inaccuracy of the data we would be able to gather, but the theory undoubtedly rules out any supernatural cause.

Given all that, the burdensome concern regards morality and meaning. How could we draw purpose from a universe that is purposeless? Is the very fact of us choosing and feeling and striving and desiring a mere illusion?
Carroll proposes a poetic view on the topic: “The universe doesn’t care about us, but we care about the universe. That’s what makes us special”. Desire becomes a useful way of describing the behaviour of a particular physical set called a human being – and it is absolutely real. And if human beings are just a special category of matter, then morality, too, has no inherent, objective truth. Each of us has her own set of beliefs and values. This view goes under the name of moral constructivism. “The fact that morals are constructed doesn’t mean that they are arbitrary”, remarks Carroll – arbitrariness being a strong objection against constructivism. Still, there are no ready-made answers. Morality is largely a conversational process, and progress is mostly made on the foundations of commonly shared values. Poetic naturalism gives no answers, because it is up to us to set the rules of the game.
I find the distinction between objective morality and constructivism fundamental, although resting on such undefined ground is no less than scary. Still, we shouldn’t reject a theory because it doesn’t provide emotionally positive answers. Searching for the truth has rewarded us with enormous advantages, and there is no reason to doubt that it will be the same with morality as well.

The Mindful Geek

The Mindful Geek - by Michael Taft


ISBN: 978-0692475386

READ: 2016-08

AUTHOR: Michael W. Taft


Meditation has the intrinsic connotation of being an exquisitely subjective experience. How could a skeptic tackle it in a positive way, bringing it to the forefront of objective investigation?
Being a meditation practitioner myself, I set out to find any critical rebuttal of the practice. After getting to know some of the ultra-skeptics like Horgan, and getting my head around meta-reviews – the tool that even ultra-skeptics are willing to embrace, for their overwhelming amount of data – I dwelled upon Taft’s work with renewed interest. Renewed, because my view on meditation resulted from some personal experience – which a healthy skeptic wouldn’t take into special account – and non-secular texts, with the exception of Krishnamurti. Having found out about the book on You Are Not So Smart was sufficient to give me a sound reason to look into it.

Taft offers an understanding of meditation inspired by computational language – i.e. conceptualising a technique as an algorithm, and meditation as a technology – and secular jargon, attaching a ton of scientific articles in support (don’t worry: some of them are meta-reviews, too).
He lists a long series of benefits, all supported by scientific evidence (although the Johns Hopkins meta-review doesn’t show relevant evidence in support of stress reduction, or of benefits for attention and sleep disorders).
He clears the field of any misunderstanding by saying what meditation is not; specifically, it is not a practice for emptying oneself of thoughts, but rather a practice of close attention. What is described here is the meditation practice generally known as mindfulness meditation.

Taft recommends paying attention to three main ingredients of a good practice: concentration, sensory clarity, and acceptance. Whenever one of them is present, that is a signal of a good practice. No space left for self-judgement whatsoever 🙂
Simple techniques – algorithms – are provided for relaxation, focus on body sensations, emotion awareness, positive intentions (yes, seriously) and open awareness; a flavour of the algorithmic framing is sketched below.
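
Here is a playful sketch of the technique-as-algorithm conceit – the steps are my compression of generic mindfulness instructions, not Taft’s literal algorithm:

```python
import time

def mindfulness_session(minutes, interval_s=60):
    """Run a basic attention loop: anchor, notice, allow - then repeat."""
    end = time.time() + minutes * 60
    while time.time() < end:
        print("Rest attention on the breath.")       # concentration
        print("Note what is present in sensation.")  # sensory clarity
        print("Allow it to be as it is.")            # acceptance
        time.sleep(interval_s)

mindfulness_session(minutes=10)
```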

Of course, none of this will give you a real glimpse of what meditation actually is. There is nothing like practice to get to it; but I can understand that some may need to clear their mental pathways first, since an informed, non-judgemental approach is pivotal to the success of the entire subsequent practice – much like beginning to squat in the wrong manner would be harmful and hard to correct.
If you find yourself among those geeks who have been fascinated by why in the world Steve Jobs wanted to become a Zen monk – although he didn’t, and now we know how – then The Mindful Geek is an awesome place to start.