One of the passages in Daniel Dennett's From Bacteria to Bach and Back where he comes really, really close to cracking the code of reality—or perhaps closer to blowing your mind to smithermemes... The passage in question is found in the chapter "Consciousness as an Evolved User-Illusion"; its title is unpromising: "How do human brains achieve 'global' comprehension using 'local' competences?"
... but the going gets really good with the epigraphs—especially when the epigraphs get glossed and riffed on in the text itself. The trip goes from a bon mot to a reflexive hall of mirrors in the funhouse of your brain, so... brace yourself for a dive ride:
Language was given to men so that they could conceal their thoughts.
—Charles-Maurice de Talleyrand
Language, like consciousness, only arises from the need, the necessity, of intercourse with others.
—Karl Marx
Consciousness generally has only been developed under the pressure of the necessity for communication.
—Friedrich Nietzsche
There is no General Leslie Groves to organize and command the termites in a termite colony, and there is no General Leslie Groves to organize and command the even more clueless neurons in a human brain. How can human comprehension be composed of the activities of uncomprehending neurons? In addition to all the free-floating rationales that explain our many structures, habits, and other features, there are the anchored reasons we represent to ourselves and others. These reasons are themselves things for us, denizens of our manifest image alongside the trees and clouds and doors and cups and voices and words and promises that make up our ontology. We can do things with these reasons—challenge, reframe, abandon, endorse, disavow them—and these often covert behaviors would not be in our repertoires if we hadn't downloaded all the apps of language into our necktops. In short, we can think about these reasons, good and bad, and this permits them to influence our overt behaviors in ways unknown in other organisms.
The piping plover's distraction display or broken-wing dance gives the fox a reason to alter its course and approach her, but not by getting it to trust her. She may modulate her thrashing to hold the fox's attention, but the control of this modulation does not require her to have more than a rudimentary "appreciation" of the fox's mental state. The fox, meanwhile, need have no more comprehension of just why it embarks on its quest instead of continuing to reconnoiter the area. We, likewise, can perform many quite adroit and retrospectively justifiable actions with only a vague conception of what we are up to, a conception often swiftly sharpened in hindsight by the self-attribution of reasons. It's this last step that is ours alone.
Our habits of self-justification (self-appreciation, self-exoneration, self-consolation, self-glorification, etc.) are ways of behaving (ways of thinking) that we acquire in the course of filling our heads with culture-borne memes, including, importantly, the habits of self-reproach and self-criticism. Thus we learn to plan ahead, to use the practice of reason-venturing and reason-criticizing to presolve some of life's problems, by talking them over with others and with ourselves. And not just talking them over—imagining them, trying out variations in our minds, and looking for flaws. We are not just Popperian but Gregorian creatures (see chapter 5), using thinking tools to design our own future acts. No other animal does that.
Our ability to do this kind of thinking is not accomplished by any dedicated brain structure not found in other animals. There is no "explainer-nucleus," for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smartphone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can't know, and don't need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can't know, and don't need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brains.
*I have silently corrected a typo, or spello, substituting "peek" for "peak".