Man Of The World
Tuesday, 17 October 2006
Connectionism
Topic: Mind
In cognitive science, the main competitor to theories based on LOTH is connectionism. Connectionism tries to come up with a more biologically plausible model of how the brain functions. The brain is just a mass of interconnected neurons, so maybe there is a way to model this in a more lifelike way on a physical level. The branch of computer science behind this, which later became the study of "neural nets," is called parallel distributed processing. For a great introduction, see here. Essentially, you have what we'll call a lower-level layer of neurons that bring in physical data, or inputs. Then a series of internal neurons decides what to do, e.g., "fire," depending on whether a certain predefined criterion is met. Rather than being programmed in a formal language, nets must be "trained," and therefore any rules they follow are implicit, in that they are distributed across the entire network. Neural nets are therefore innately holistic. Training a net consists of something like feeding it a picture of a face and then trying to get it to pick out similar pictures. This kind of pattern recognition is something neural nets do very well, better than conventional computers. Such recognition seems more lifelike, closer to how the eyes and brain actually function. Another advantage that makes nets seem more like a real brain is the way they degrade gracefully as nodes are removed, whereas cut into the silicon of a conventional computer and the whole thing falls apart.
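To make the picture concrete, here is a minimal sketch (mine, not from the PDP literature) of the idea in Python: units that "fire" when a weighted sum of inputs crosses a threshold, adjusted by training rather than by explicit programming. I use the classic perceptron rule on a toy task, learning logical OR, as a stand-in for net training generally; the point is that the "rule" the net follows ends up stored nowhere but in the distributed weights.

```python
import random

random.seed(0)

def fire(weights, bias, inputs):
    # A unit fires (outputs 1) if the weighted sum of its inputs,
    # plus a bias, crosses the threshold of zero.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

# Toy training data: input patterns paired with the desired response.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Start from random connection strengths -- no rule is programmed in.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1

# "Training" rather than programming: after each wrong response,
# nudge the weights toward the target (the perceptron rule).
for _ in range(50):
    for inputs, target in data:
        error = target - fire(weights, bias, inputs)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

# The trained net now reproduces the pattern, though no line of code
# states the rule explicitly; it is implicit in the weights.
print([fire(weights, bias, x) for x, _ in data])
```

A single threshold unit like this can't capture everything (famously, it can't learn XOR), which is part of why real nets use layers of internal units, but it shows the basic contrast with a conventional program: the behavior is trained in, not written down.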

But nets don't do everything better. In fact, the key failure of nets is the attraction of LOTH and the classical model: nets don't model higher cognitive functions very well. This is what Fodor calls the systematicity problem. A real mind, it appears, learns the formulas for things like sentences; it gets a system down, so to speak. Recall from my last post that I said LOTH is brilliant because of the insight that representation has to go beyond mere pictures, since pictures alone aren't complex enough to model thinking. A net, then, would naturally be very good at coming up with the picture part of a representation but not the sentential component. And this is where the controversy comes in.

A neural net can actually be trained to do sentential representation, LOTH style. Maybe it's harder than in conventional computing, but hey, maybe in the future we'll figure out ways to do it better. In this case, however, a net isn't much of a departure from the classical model; rather, it's a way to implement the classical model on a different hardware scheme, one that seems more brainlike on a physical level. Those who take this road are called implementationalists. The interesting school of connectionism, though, is radical connectionism. The radicals would say forget about LOTH. But under that option, what could be the alternative for explaining intentions, which seem, naturally, very sentence-like? The alternative is eliminativism: just do away with intentions. Of course, that's a pretty radical position to hold, since folk psychology can no longer be said to be useful. On such a view, talking about "John wanting ice cream" has as much to do with the mind as humors have to do with health.


Posted by gadianton2 at 7:19 AM
Updated: Wednesday, 18 October 2006 12:35 PM
