Man Of The World
Wednesday, 6 December 2006
D's Qualia from another angle
Topic: Mind

One of the things about Dennett is that he insists his theory is just a sketch. That might be important to remember because, where there seem to be unresolved issues that need answers, perhaps he's simply not offering any more details, leaving his words open to readings he didn't intend. I've jumped around in the second half of "..explained.." still looking for an explicit denial of what I'd call "minimal qualia" or, "the phenomenon is in the mistake". Obviously, he denies "redness" and so on. But as Searle insists (and it's an obvious concern I'm sure many have), what is the mistake of thinking one is perceiving red? Can computers, for instance, mistakenly think they are perceiving something red? Again, trash-canning the formal project of phenomenology, of a science that rigidly grounds objective perception, is far easier to accept than denying that two seconds ago I at least SEEMED to feel a pain in my arm. That's the point Searle attacks Dennett on, the obvious point to attack him on, but I don't see where he explicitly makes this case. In favor of this case, from his own words, the most important part of the book is in the first couple of chapters where he's making the case that there's no central place in the mind where it all comes together in a unified stream. If this is true, then there is no single place for "redness" to come together, or even the "seeming" of redness to come together. BUT, he frequently uses the language "it only seems that way" when talking about colors and sensations. How does it "seem that way"? If there is a single narrative track at the top of the heap, can something "seem that way" to "it", even if we'd deny that there is "anything it is like to be" a person - who is at best the "center of gravity" of innumerable narratives?

I haven't said anything new there, just yet another summary of what I've said a few times now. To add a few brush strokes to Dennett's incomplete picture, we have to talk about what consciousness is to Dennett - the positive case. There is in animals and computers the ability to calculate, to take inputs from the environment, process them, and respond. But this isn't consciousness. It might be thinking. But consciousness is "second order" thinking. Or, thinking about thinking. When you're driving, your brain is doing all kinds of calculations, taking in all kinds of visual information which isn't part of that narrow "stream" we like to think of as our conscious experience. And that would-be stream also just happens to be the supposed "what it is like to be" of other philosophers. Qualia are then a subset (or perhaps the entirety) of this "what it is like to be." So that means the positive case for what the "mistake" of qualia is will be encompassed by second order thinking. If we understand second order thinking, we'll understand the mistake we call qualia.

Now Dennett doesn't have a clear answer for what second order thinking is, but he's got a research project for trying to understand it. We can get almost to the end of Dennett's thesis by his considerations of blindsight. I've talked about that before as perhaps the archetypal case for phenomenal consciousness. A person with damage to the brain's visual centers can't "see" anything in a certain sweep of the visual field but can report, with surprising accuracy, simple stimuli presented there. Dennett's star subject is a guy who can track the motion of a fast-moving point of light and mimic its path with hand gestures. When asked if he's conscious of the motion, he replies (something like), "of course I am, how else could I show you what it's doing?" So Dennett argues this special case of blindsight is different from the general case, and the difference turns on this second order awareness, which becomes the crude link between thinking and the "mistake" of qualia.

The last thing I'll probably have to report about this book is the section where he talks about the difference between this special case of blindsight and normal vision. This blindsight example is the foot in the door, but there has to be a fuller version to account for the clearest examples of would-be qualia that we think we perceive. And I think at that point, there won't be much else to go on in reconciling "minimal qualia" with computation.

p.s.

I realize that last sentence could be misconstrued as reductionist, which isn't Dennett's goal; he wants to eliminate qualia altogether. But there has to be some way of mapping the discussion into the language on both sides of the fence, and that's what I'm trying to do. When Ammon stood before King Lamoni, he inquired,

"believest thou in a Great Spirit?"

"Yea."

"This Great Spirit is God!"

Even though Ammon believed God is a super powerful extraterrestrial, not a great spirit, he had to find a way to couch his thesis in the language of his audience. So even if we want to be eliminative towards "Great Spirit" or "Qualia", I think the language can be taken with a grain of salt for instructional purposes.


Posted by gadianton2 at 12:21 PM
Monday, 27 November 2006
Intents and Drafts
Topic: Mind

I braved a fairly long and boring stretch of Dennett's book over the last few days; things are picking up a little now. He's been exploring the "architecture" of the mind. This has him talking about virtual machines, neural nets, and schemes by which "drafts" rise to power. A lot of this stuff I've covered in other posts. The one thing I found interesting was his discussion of intentions and how that fits into multiple drafts. Since there is no center "where it all happens", there is no single entity standing in an intentional relation to something else. He's attacking the (obviously oversimplified) idea that we always mean what we say. Sometimes we just say, and then later invent a story about how we meant it. There are two important things going on in these cases. In a Freudian slip, for instance, there is intuitively not a single producer, but (at least) two producers which blend their voices. And then he notes we are not always intending and then picking out the right words, but often pick out the words, er, "mnemonically", the "real intent" never to be found.

 


Posted by gadianton2 at 2:14 PM
Thursday, 16 November 2006
Memes?
Topic: Mind

I haven't had a lot of reading time lately, but I've gotten a little farther in Dennett's book. Now, I knew Dennett was a fan of Dawkins's "meme" theory (not many professional philosophers are, btw), but I hadn't really considered before to what end. Sure, it would be tied to modularity, but he's explicit that memes are in fact what consciousness is. The mind is, as he says, a bunch of memes, and the brain a sufficient hardware platform to download them onto. Wild. And convenient. Of course, here we're talking about access consciousness, not phenomenal, which Dennett thinks either doesn't exist or is meaningless (as I've said, I can't pin him down on this one, despite what others say about him). So memes make up the kind of consciousness that solves problems, makes decisions, and so on.

More interesting to me was his argument for the brain as a machine. It's similar to Fodor's, but instead of focusing on language, he's talking computation. Fodor says that thinking resembles language (hence LOT), which then resembles formal logic, which of course can be reduced to recursion, or computing. Dennett talks about Turing's project as an introspection into mind and the computing hypothesis as the end result. So to Dennett, computing follows directly when sitting down and plotting out what thinking is step by step. It's an interesting argument, I'll give him that. To put it another way, people normally think about computers as one thing and psychology or minds as another. Somewhere along the way, someone had the bright idea to use computers as a model of the mind, or to mimic what the mind can do. But Dennett is saying that the very idea of computing fell out naturally when Turing made his attempt to understand the mind. So the two things were closely related from the beginning.
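To make the "language reduces to logic reduces to recursion" chain a little more concrete, here's a toy sketch of my own (nothing Fodor or Dennett actually wrote): a propositional-logic evaluator built out of nothing but a recursive function, which is the sense in which formal symbol manipulation bottoms out in computation.

# Toy sketch (mine, not Fodor's or Dennett's): evaluating formulas of
# propositional logic by plain recursion, as an illustration of the idea
# that formal symbol manipulation reduces to recursive computation.

def evaluate(formula, assignment):
    """Recursively evaluate a propositional formula.

    Formulas are nested tuples: ('not', f), ('and', f, g), ('or', f, g),
    or a bare variable name looked up in `assignment`.
    """
    if isinstance(formula, str):               # atomic proposition
        return assignment[formula]
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], assignment)
    if op == 'and':
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == 'or':
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError("unknown operator: " + str(op))

# "p and not q" under p=True, q=False comes out True
print(evaluate(('and', 'p', ('not', 'q')), {'p': True, 'q': False}))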


Posted by gadianton2 at 12:41 PM
Wednesday, 8 November 2006
Multiple Drafts
Topic: Mind

I'm about 150 pages into Consciousness Explained. It's a little harder to follow than I expected. I also have to say I'm enjoying it less than his online essays. He drags out the points he's trying to make to the extent that, if you have a bad memory like me, by the time you finish an argument you've got to go back to see what was at stake in the first place. I get the feeling this was calculated. He's trying to build up expectations of a gut-busting feast to come by relishing, with great arrogance, the appetizers. Well, some of that frustration is exaggerated by reading a one-time controversial book 15 years later. At any rate, the mundane parts of the book, as I can get through them, are well worth it. And by that I mean his history of cognitive science to that point. The experiments he draws on and explains in detail are fascinating enough in their own right and deserve a book for non-experts.

Dennett's belief is that within all branches of knowledge that feed into cognitive science, from neuroscience to philosophy, the language of consciousness is permeated by what he calls "Cartesian materialism," even though virtually no one officially adheres to such a doctrine. He argues that the standard working model of consciousness is like a filmstrip. The movie you watch on the weekend has the content that it does because it was filmed, and the places where it drifts from reality could be because of something like inserting props into the scene beforehand, or because of editing the rough product afterward. If I've got Dennett right, then get rid of the studio, the director, and the stage crew, and let the reels and microphones run live. Why would we not just do that anyway?

The problem raises its head in numerous counter-intuitive psych experiments. For instance, a device is put on the wrist, elbow, and upper arm. The device taps the wrist a few times, then the elbow, then the upper arm. The brain "interprets" (notice the guy in the studio sneaking in to watch, to do the "interpreting") this as taps moving up the arm, and so the subject reports something like a rabbit hopping up his arm rather than taps localized at the wrist, then some more localized at the elbow, and so on. For Dennett, get rid of all the talk in the middle about the brain making calls. You have multiple input and calculation streams going on, and you have reports from the "subject". The editing room version of this story would take the first set of taps, wait, then get the second and third sets of taps in, do some splicing and touch-ups, and then give a final presentation to the "subject" for reporting. But Dennett's point seems to be that since there is no movie project, there is no one in the editing room to know that in this case we've got to wait for the second and third set of reels. And how could that be known anyway? The brain doesn't know that two more sets of taps are coming up. And the experiments show that if those don't follow, the subject feels the taps merely on his wrist, as expected.

Now, it's hard to get rid of the "Cartesian" language and still visualize what's going on, and this makes following Dennett a little difficult because he keeps the heavy onslaught of verbiage coming. The key, I guess, is to get rid of any temptation to describe the experiments in terms of "I" felt this and then that, because the "I" already presupposes a single, pure stream of post-editing-room consciousness running. So imagine two streams, say audio and video, that sometimes run slightly out of sync, and that's all there is to the story. Don't imagine watching and listening and trying to put the two tracks together, because now you're innocently falling into the trap. And once you've gone down that road, you can't stop. You must imagine another guy in the head of the first guy in your head to worry about how the audio and visual inputs are being sorted out in his mind. There is no way to make the story first-person comprehensible.
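To keep the two pictures straight, here's a toy sketch in code - my own framing, not Dennett's - of the difference between an "editing room" model that buffers the taps and presents one finished narrative, and a "multiple drafts" picture where the report just reflects whatever content has accumulated at the moment the system happens to be probed.

# Toy sketch (my framing, not Dennett's): "editing room" vs. "multiple drafts".

taps = [("wrist", 0.0), ("wrist", 0.1), ("elbow", 0.3), ("upper arm", 0.5)]

def editing_room_report(taps):
    # Wait for everything, splice, then present one finished narrative.
    locations = {loc for loc, _ in taps}
    if len(locations) > 1:
        return "a rabbit hopping up the arm"
    return "taps at the " + taps[0][0]

def multiple_drafts_report(taps, probe_time):
    # No finished product: the report reflects whatever content has
    # accumulated at the moment the system happens to be probed.
    available = [loc for loc, t in taps if t <= probe_time]
    if not available:
        return "nothing yet"
    if len(set(available)) > 1:
        return "something moving up the arm"
    return "taps at the " + available[0]

print(editing_room_report(taps))           # one canonical, already-edited story
print(multiple_drafts_report(taps, 0.2))   # early probe: "taps at the wrist"
print(multiple_drafts_report(taps, 0.6))   # late probe: "something moving up the arm"

The point of the second function is that there is no single moment at which the "real" experience gets fixed; different probes yield different drafts, and none of them is the official one.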

Well, none of that on its face could possibly be that disputable. It's the implications Dennett supposedly draws. But like I've asked before, is he arguing for radical phenomenal skepticism or is he trash-canning phenomenology? Most people will come away from a basic modern philosophy class having come to believe grounding knowledge is impossible, but still believe they know things. I don't see why phenomenology would be any different. I think the same people who make it through philosophy 102 and still believe they know something can still believe, rationally, that they feel pain even if they come to disbelieve pain exists in real, discrete quanta that can be formally described. I'm not yet convinced Dennett is arguing the strong thesis. Every summary I've read of Dennett presents him this way, but I haven't found the smoking gun yet in his own words from my own reading of him. Dennett presents the common sense view most people hold to as typified in Descartes' pineal gland, the mysterious, unknown, and very, very small part of the brain that connects the disembodied mental with the physical. I wonder though if that isn't on its surface just symptomatic of a deeper problem, and one that would encompass Dennett's view as well: Descartes' introduction of representational knowledge. Anyway, that's for another post.

The last thing I want to mention is that when I read Dennett, as I've posted before, I try to think of experiences I've actually had which have something to do with the points he makes. A strange thing used to happen to me frequently a few years ago (trying hard to embellish as little as possible). I'd wake up, open my eyes, and then fear I'd woken up right before my alarm was about to go off. My alarm isn't loud, but it's nerve-grinding, and I prefer to wake up before it goes off so I can flick the switch and not hear it. In these cases, I wake up, open my eyes, fear, and then within a subjective second, have my fears confirmed as the alarm goes off. After it happened a couple of times, I thought to myself, "How can I be so unlucky?" Or, how can my wake/sleep cycle be that precise? This was before I had read anything about cognitive science. After it happened a few times, though, I decided that the alarm was waking me up but somehow there was a delay in my perception of it. And that last sentence is loaded with the mistakes Dennett is talking about.


Posted by gadianton2 at 10:53 AM
Friday, 3 November 2006
Dennett and Pinching
Topic: Mind

Trying to keep pace on breadth, I'm about 100 pages into Dennett's "Consciousness Explained." I'm not big into long books and Dennett just drags on - his online essays are much better for me. He spends a lot of time talking about how our intuitions are often wrong, perhaps more wrong than right. Following his friend Richard Rorty, he wants to destabilize "indubitability", specifically by showing how scientific third-person accounts cast doubt on first-person reports. The question though of course, and one I still have after reading his essays online, is how far he intends to take this. If qualia become meaningless, does that mean we really don't feel anything? There is a huge gap between a scientific phenomenology project like Husserl's and feeling anything - absolutely anything. A parallel problem people might be more familiar with is found in epistemology. No one has succeeded at solving the problem of knowledge. No one has succeeded at defining what science is. But does that mean there is no such thing as knowledge? Do we become radical skeptics? Most people who reject the project of modern philosophy still believe they know things or believe that science works differently than religion. Is it any surprise that a formal phenomenology would also fail in a similar way to epistemology? And if it does, do we become radical phenomeno-skeptics who quit believing we "feel pain"? More to the point, is that what Dennett is asking us to do?

This is important, because Searle, for instance, rejects Dennett by asking us to pinch ourselves and then demand, "now say there is no such thing as qualia!" Searle insists on what I'd call minimal qualia, that the phenomenon is in the mistake. But there is a huge difference between that minimal thesis and phenomenology. There is certainly no guarantee either that if the minimal thesis is true, phenomenology is possible. So buying something akin to Meditation Two doesn't necessarily mean we can get to Three, Four, and Five. So the question is, is Searle (and others) creating a straw man, or is this really what Dennett thinks? Dennett hasn't tackled this issue head on from what I can tell.


Posted by gadianton2 at 8:35 AM
Final Final Searle..(for now)
Topic: Mind
One more thing I wanted to say about Searle is that he got me thinking about a couple of very important issues; I just didn't post much about them because I don't feel like I have enough of a handle on them (and limited time lately). One of those things involves the relation between causality and functionality, and apparently this is a big one for Searle given the guy buys multiple realizability, brain function as connectionist (and digital), and claims the brain is just a machine - yet thinks functionalism is insanity. I touched on this in the last post but it needs a lot more space. I think this might be a case of therapeutic analysis, that is, to show why Searle is for all intents and purposes a functionalist - or even a computationalist. A functionalist, perhaps, with the right externalist account - one that may not yet exist. David Chalmers and Ned Block have both provided some framing for Searle's problems that, given Searle's refutations of Roger Penrose's anti-AI arguments, I have a hard time seeing how he can refuse. Chalmers for instance argues that if you get the computation tuned right, the causality falls into place. Anyway, that's all flagged for follow-up - I can't shed any more light on the problem now.

Posted by gadianton2 at 8:33 AM
Wednesday, 1 November 2006
Final thoughts on "..mystery.."
Topic: Mind

The last couple of things I wanted to say about Searle's "..mystery". First, revisiting the zombie issue quickly, there's another direction to take supervenience physicalism that might make it more clear. Searle, in the last part of the book, once more offers the "logical possibility" that his Macintosh might all of a sudden become sentient. Though he isn't explicit at all on this point, I take him to mean that the laws of physics could all of a sudden change in such a random way as to accommodate the possibility, consistent with his thinking earlier in the book. Otherwise, if such a possibility existed absent a change in any physical laws, then the mental doesn't supervene on the physical and physicalism is false.

More importantly, as a whole, how different is Searle from his functionalist, physicalist rivals? Contrary to what some people seem to think, Searle believes a machine can be conscious. He thinks brains are machines. In fact, he even appears to think either that the brain is digital (neuron connections) or that it very likely is - see his refutation of Penrose in this same book. He even, like Dennett, uses the artificial heart as an example of a machine that takes the place of a human organ. So in the end, his biggest issue seems to be that the way functionalists and computationalists are trying to understand the machine is wrong. Functional relations (for reasons I'm going to skip for now) appear to be a meaningless idea to him from the outset. And computation can't capture intentions (semantics - see the Chinese Room argument, outlined in about 5 billion places on the net). What Searle does seem to think is that as we begin to get answers to the problems of consciousness, the questions we think are insurmountable now will probably seem senseless then; understanding will reach a whole new level. This is strangely close to what eliminativists think. But, anyhow, how do we begin to get answers? While Searle is all for AI, absent the strong implementational assumptions, what's most intriguing to him gets a single example in this book. He brings up the medical phenomenon of blindsight. A certain part of the visual field is damaged and the subject swears s/he can't see anything there. But some experiments show that if X's and O's are displayed in this field, and the subject is asked to just take a guess as to what's there, the subject is usually right. So Searle seems to reason that having a thorough understanding of something like blindsight is a first step in the right direction.


Posted by gadianton2 at 2:13 PM
Wednesday, 25 October 2006
Botched Zombies?
Topic: Mind

Last night I read through Searle's critique of Chalmers in "...Mystery..". Searle either misrepresented Chalmers pretty badly or just refuses to go into detail, so some of his refutations weren't very convincing. Searle is the master of common sense examples and that makes him fun to read. But sometimes you get the feeling he's being sloppy. That could be because he really is being sloppy or because he just thinks the subject matter doesn't deserve serious attention.

Searle rejects Chalmers' zombie argument by basically saying he can imagine a world where the laws of nature are different, yet the microstructure the same, and so pigs fly. That being the case, we can conclude flight isn't physical. This seems obviously wrong. Chalmers' response printed in the book is essentially that in Searle's world, we'd notice a difference: a lot of matter in the air that previously wasn't there, e.g., pigs flying. What he means by this is that Searle didn't take the thought experiment far enough. The imaginable world for the zombie argument isn't just an alternate reality of this one where the furniture has merely been rearranged. Chalmers means a molecule-by-molecule exact replica of this world. Me sitting at this very computer typing this very sentence, and so on. Searle's response to that was more or less a repetition of his initial rebuttal, but emphasizing that everything could be the same physically while the governing laws differ. Searle's angle doesn't work, even though there's a hint of something I do agree with that I'll explain later.

The exchange was interesting to me because it demonstrates there's confusion even among masters when they're arguing something as subtle as supervenience physicalism. Chalmers' description, which paints two pictures, one exactly like ours and one like ours but with excess matter in the sky, brings to mind David Lewis' articulation of physicalism via dot-matrix printers. The two worlds are nothing more than the arrangement of the dots. As the SEP puts it, the resulting picture created by the dots supervenes exactly on the placement of the dots, and likewise everything in the world supervenes on the physical. But pressing this analogy might buy Chalmers more than he can really afford. True, Chalmers didn't invoke Lewis' argument specifically - I am - but his response was basically a three-dimensional version of it. Searle was wrong, according to him, because in a 3-D mapping of the world, there would be more matter in Searle's counterfactual sky than in our sky, and the thesis requires one-to-one physical identity. Now this creates for me an unknown in Chalmers' thinking and I'll certainly need to follow up on it. It very well could be that this response to Searle is all that he felt was warranted to expose the most obvious of Searle's mistakes. Maybe Chalmers would have more to say given more space. But this mistake on Searle's part wasn't his most important mistake, and failure to take that up makes it confusing for us lay readers.

Supervenience physicalism could be articulated in dot-matrix style by saying every piece of matter is the same across worlds, and that would probably be right. But it doesn't make explicit the fact that ALL physical laws would also be the same. The analogy is just that: the dots on the paper don't just represent bits of matter, but the laws of physics as well. So Searle imagining a world where the laws of physics violate the laws of physics is a contradiction.
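Just to pin down the shape of the claim, here's a toy version of Lewis' dot-matrix analogy in code - my own illustration, not anything from Searle or Chalmers: the rendered picture supervenes on the dot grid, so duplicating the dots duplicates the picture.

# Toy sketch (mine): the picture is nothing over and above the dots.

def render(grid):
    # The "picture" is fully determined by the arrangement of dots.
    return "\n".join("".join("*" if dot else "." for dot in row) for row in grid)

world_a = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 1]]
world_b = [row[:] for row in world_a]  # a dot-for-dot duplicate of world_a

# Same dots, same picture: no difference in the image without a difference
# in the dots. That's the shape of the supervenience claim.
assert render(world_a) == render(world_b)
print(render(world_a))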

Chalmers for sure has a tactical advantage in going for zombies rather than flying pigs because it's easy to see a pig fly, but ever since Leibniz first posed it, seeing consciousness has been THE problem for philosophers of mind. Alan Turing gives us the best-case scenario for intuitively grasping something like consciousness by bringing the problem exactly to our personal, everyday level of interaction. Leibniz crushes this intuition by blowing up the brain so big you could walk through it, and then demanding, "Let's see you find the thought now!" Chalmers goes in the other direction to promote the same thesis: take a bird's-eye view of the whole universe and the personalities just don't shine through. I'm not saying Chalmers is sloppy, he's one of the clearest writers I've read; I'm just saying the presentations in these thought experiments sometimes cloud our judgement of the actual content.

And this is where I come back to my hint of sympathy with Searle. If you're selling molecule-by-molecule duplicates of the entire universe, then our focus is taken off the "laws" governing those molecules as a matter of presentation, even though the content doesn't hinge explicitly on that. Our imaginations are primed for easier agreement. But in the end, Chalmers no doubt believes his world of zombies has exactly the same physical description, both molecules and laws, as our own. Positing flying pigs just isn't the same.


Posted by gadianton2 at 7:49 AM
Monday, 23 October 2006
Searle on Dennett's Qualia
Now Playing: TBE
Topic: Mind

I'm reading Searle's The Mystery of Consciousness right now. I skipped ahead last night to his fight with Dennett since I've discussed Dennett on a number of occasions. It's Dennett's view on qualia that is of particular interest to me because it's so unusual. But I just can't take his advice all the way. My rejection of Dennett is more or less the same as Searle's, based on Cartesian certainty. The problem is, I'm not sure I buy into Cartesian certainty.

Descartes' "I think therefore I am" sets out to establish the thinking self. Searle (like I'm tempted to) has sort of reworked Descartes' epistemological statement of certainty into a phenomenal one, "I feel therefore I am." It doesn't matter how mistaken those feelings are as the mistake itself captures the phenomenal.


Posted by gadianton2 at 1:09 PM
Thursday, 19 October 2006
Comparing Pinker, Dennett, Fodor, and Churchland
Topic: Mind

I want to compare some of the ideas of four naturalists/atheists within the philosophy of mind or cognitive science who have also written for the lay audience: Daniel Dennett, Steven Pinker, Jerry Fodor, and Paul Churchland. A lot of what I said about LOTH and connectionism in the last two posts lays the foundation for this one. Granted, there are no clean stereotypes and there are certainly subtle distinctions here I'm going to miss; this is going to be pretty darn broad-sweeping. Let's start with breaking down the mind into its two commonly attested components of intentions and qualia.

Dennett is an eliminativist on qualia, as I discussed a few posts ago. Pinker and Fodor believe qualia exist but don't have much to say about them other than that they're a mystery. Churchland is a little more specific in that he argues against thought experiments which move from qualia to dualism. Fodor, however, has developed qualia arguments against functionalism (not physicalism).

Dennett and Pinker are both disciples of Fodor's early work on LOTH and the computational model of mind. Pinker and Dennett could probably be regarded as functionalists of some sort, while Fodor sees the computational model as breaking down for higher-level cognitive processes. Churchland is an explicit eliminativist towards intentions. Dennett's position is nuanced because his "multiple drafts" theory renders intentions, and hence any computations which compose intentions, merely as an "intentional stance." There isn't an explicit "I" who has intentions, but if you take the sum total of all the various brain processes running together, you get a "center of gravity" that can be studied on an intentional level to make useful descriptions, descriptions that are more meaningful than purely physical ones would be. So Dennett is teased in Churchland's direction but stops short.

On a hardware level, Pinker and Dennett both appear to accept connectionism, or rather implementationalism, since they run the computational version of mind - a serial processor - on top of that hardware. Churchland is a radical connectionist: we can scrap folk psychology altogether, symbol processors and all that, and work on a completely new model of thinking. Fodor is strongly opposed to neural net models, though I can't figure out if he'd reject the possibility on a physical level. I think he might because of the intrinsically holistic nature of PDPs, but that's a discussion I'm not ready for, as there seems to be a lot of uncertainty about what different contributors to the field view as holism, and why.

Dennett and Pinker are pioneers in Darwinian psychology, which builds on Fodor's work on modularity. Fodor follows Chomsky's nativism thesis, which argues that some abilities, such as learning a language, are innate. These abilities, along with peripheral functions like sight and hearing, are inborn. While Fodor doesn't attempt a neurological account of what makes for a module, I think we have a pretty good analogy with computer circuitry. Specifically, with specialized circuitry that we'd now call ASICs. These modules are more or less hardcoded to perform specific functions. Think of the chip on your video card doing the work as a function of circuit design, as opposed to the CPU on your computer doing the same manipulations virtually, following a software program. It's easy to imagine this kind of specialized function when it comes to sight or hearing, but it becomes more abstract when positing language, and things drastically beyond language. Fodor never believed modularity (and computation) could explain much beyond peripheral functions and language. Pinker and Dennett believe modules explain just about everything. They opt for "massive modularity": just about all we do in life arises from some kind of modular process that won out in a Darwinian battle. Fodor has been very critical of Darwinian psychology and this apparently has led Dennett to view him somewhat as an enemy. The lines of battle in my view are largely drawn by the criteria for what can be considered modular. The brain is very complex and perhaps Fodor's criteria are too narrow. But relax those assumptions too much and everything imaginable might be included by virtue of nothing more than a broad definition. Dedicated circuitry for language? Maybe. But for stuff like reading fiction novels? There are just too many easy Darwinian stories we can make up and too few computer models that go beyond Fodor's domain-specific, encapsulated, ASIC-like structure to secure them.
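For what it's worth, here's a toy sketch of the ASIC analogy - my own example, not Fodor's: a "module" as dedicated circuitry that does one job and nothing else, versus a general processor that produces the same behavior only by virtue of the software it happens to be running.

# Toy sketch (mine, not Fodor's): dedicated "module" vs. general processor.

def edge_detect_module(signal):
    # "Hardwired" special-purpose routine: one fixed job, nothing else.
    return [abs(b - a) for a, b in zip(signal, signal[1:])]

def general_processor(program, signal):
    # General-purpose interpreter: same behavior, but only by virtue of
    # the software it happens to be running.
    data = list(signal)
    for instruction in program:
        data = instruction(data)
    return data

program = [lambda d: list(zip(d, d[1:])),          # pair neighboring samples
           lambda d: [abs(b - a) for a, b in d]]   # take their differences

signal = [3, 3, 7, 2, 2]
print(edge_detect_module(signal))            # [0, 4, 5, 0]
print(general_processor(program, signal))    # same output, different route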

Finally, it might be interesting to note that Fodor, Pinker, and Dennett are all rationalists as opposed to empiricists. I like to expose little facts like this to a skeptic-oriented audience because the perception of many skeptics is that empiricism rules the day in every way. In fact, the further one goes in the direction of Pinker and Dennett, the deeper the commitment to rationalism. Of course, ontologically, these three are physicalists. Churchland would lean, as connectionists do, away from nativism (and rationalism). He's a functionalist, but of course his model of causal relations is connectionist rather than traditionally computational.

 


Posted by gadianton2 at 12:59 PM
