Sunday, May 4, 2008

We are a Strange Loop

So, I've been reading Doug Hofstadter's I Am a Strange Loop, and while I think Dougie (as he is sometimes called) is on to something, I'm not convinced yet that he really knows what he's on to exactly.
The book is largely (so far anyway, I'll let you know if there are any surprises) a re-telling of his earlier book, Godel, Escher, Bach, which is a detailed examination of Godel's Incompleteness Theorem, with an eye towards the magic of self-representation. Basically the idea is Ouroboros as the heart of consciousness: it is our ability to model (imagine, understand, etc.) ourselves that makes us conscious. That's not all that crazy of an idea; self-awareness is basically a synonym for consciousness, and it's hard to imagine what consciousness without self-awareness would even mean. The exciting thing, for Mr. Hofstadter, is that this sort of self-representation is a necessary consequence of any symbol system of sufficient complexity, which, Hofstadter argues, follows from Godel's lovely theorem.
This consequence, that any (inherently meaningless*) symbol system of sufficient complexity (enough to deal with counting numbers: 1, 2, 3, 4, etc.) leads inexorably to self-representation, strikes Hofstadter as the key to understanding consciousness. This, I think, is somewhere in the ballpark of the truth. What Hofstadter doesn't see, though, are the consequences of being able to see consciousness as such a simple, universal event. He does see that this opens up the potential for A.I. (he spent a great deal of GEB talking about that, but much less so in IAASL, at least so far), but he has a crucial assumption that completely covers up the most exciting thing about his way of thinking of minds: Hofstadter places individual, adult, human beings at the very top of his hierarchy of self-hood, awareness, and consciousness. This might not seem like a controversial decision to some people, but to me it seems insane; networks of human beings have a far greater potential for self-awareness than any single human being ever could. The same way that an ant colony (which Hofstadter knows some stuff about) is far more self-aware than any given ant (though still far less self-aware than any given human being), a human colony is far more self-aware than any given human being. It's completely fascinating that Dougie has missed this (again, he may not have, and I might be speaking too soon, but this strikes me as a pervasive issue in his thought), because he does an enormous amount of thinking about networks of smaller things being outside of our perception just because of the size we are. He's ever so keen to recognize that the way we see things has everything to do with our size, but he only seems interested in seeing that there are things smaller than us, never bigger.
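The self-representation Hofstadter is after can be made concrete in a few lines of code. A quine is a program whose only output is its own source: a little symbol system that carries a complete description of itself and uses it to reproduce itself. (This example is my own illustration of the idea, not anything from the book.)

```python
# A minimal self-representing program (a quine): the string s is a
# description of the whole program, and printing s formatted with
# itself reproduces the program's source code exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it and the two lines it prints are the two lines of the program itself, the Ouroboros closing its loop.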
I suppose that part of what helps me see minds all over the place, big and small (that are really all part of the same big system, and therefore all part of the same big mind, the Homunculus is You and all that) is my thinking about the internet so much, but I think it has to do with something simpler than that too.

Just thinking about human relationships, and the fractal nature of our understanding each other.
Thinking of you
Thinking of me
Thinking of you.
We make a mind together, you and I, when we sit down and talk, or just interact in pretty much any way. We are aware of ourselves together just as much as we are aware of each of us separately. Thomas Nagel, in his ever-so-famous (if not nearly famous enough) "What Is It Like to Be a Bat?", proposed that we think of conscious things as things that there is something it is like to be. He contrasts things like rocks and fungi, which he does not suppose it is like anything at all to be (or perhaps: being them is just like being nothing), with things like bats and other people, which we often find ourselves wondering what it would be like to be. I think Thomas suffers from an impaired imagination, and I also think he does not understand the consequences of his own argument, which is mostly centered on showing that no matter how much we wonder what it is like to be a bat, we can never know at all. The thing is, it follows that we can't even know what sorts of things it is like something to be.
I propose a different solution. It is like something to be a part within a whole, and as a part in a whole, it is possible (I am tempted to say necessary) to interact with other parts within the whole. Now: it is never at all possible to draw a hard and fast distinction between some part of a whole and another part, because the interactions between them are indistinguishable from actions on the part of some bigger portion of the whole. [There may be simplest parts, each of which is a distinguishable individual, but apart from those, no composite part can be strictly delineated, because it is then composed of simple parts that must interact with BOTH composite parts of any pair of interacting composite parts. I'm only dealing with systems that do include composite parts, because systems without composite parts are boring, and I don't really think there are simplest parts of the whole we are a part of, so I am largely (except for this large parenthetical, that this parenthetical is nested in) ignoring simplest parts as well.] Then: there is always some ambiguity as to what part of the whole any given part is, and that ambiguity opens up the field of what it's like to be some other part. I can imagine what it is like to be a bat, because there is no real way to tell where the bat ends and I begin. Similarly, when you and I are together, it is impossible to draw any sharp distinction between what is you and what is me, and so it is no surprise that we start to use "we". Now: it is also always possible to make an arbitrary distinction between this thing and that thing, between you and me, but we must be careful to remember that such distinctions, while terribly useful, and while terribly difficult to avoid seeing, are no more and no less real than the distinction between a pawn and a bishop in a game of chess. We make the distinctions, and they are only as real as we make them.
Once we see that, we can really wonder, and in some small way find out, what it is like to be a bat, or for that matter, a mountain, a grain of sand, an ant colony, a pair of humans, the Internet, Mother Nature, the Sun, or even The All. We are all connected, not in some mystical, wishy-washy way, but just because of the nature of being a composite thing that is a small part of a great whole. Any system of sufficient complexity will inevitably include the possibility of self-awareness, and further, a system of sufficiently greater complexity will inevitably include the possibility of self-aware sub-systems.

Which reminds me of my proof that not even God can see everything. But I'll save that for next time.




*The logic that Godel worked with was definitionally meaningless, requiring a separate interpretation schema, entirely divorced from the symbols themselves, to make any of the typographical marks anything more than pieces in a game, jumbling around according to the rules they are given (or really, composed of), churning out theorems while nobody's looking.
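That kind of meaningless symbol-game is easy to build. Here is a sketch of Hofstadter's own toy example from GEB, the MIU system (my choice of illustration, not something from this post): strings get pushed around by purely typographical rules, with no interpretation attached, and "theorems" pile up all on their own.

```python
# The MIU system from GEB: four typographical rules applied to strings
# of M, I, and U. No symbol means anything; theorems are just whatever
# strings the rules happen to produce, starting from the axiom "MI".
def successors(s):
    out = set()
    if s.endswith('I'):               # rule 1: xI  -> xIU
        out.add(s + 'U')
    if s.startswith('M'):             # rule 2: Mx  -> Mxx
        out.add('M' + s[1:] * 2)
    for i in range(len(s) - 2):       # rule 3: replace any III with U
        if s[i:i+3] == 'III':
            out.add(s[:i] + 'U' + s[i+3:])
    for i in range(len(s) - 1):       # rule 4: drop any UU
        if s[i:i+2] == 'UU':
            out.add(s[:i] + s[i+2:])
    return out

theorems = {'MI'}                     # the single axiom
for _ in range(3):                    # churn out a few generations
    theorems |= {t for s in theorems for t in successors(s)}
print(sorted(theorems))
```

The famous point of the puzzle is that "MU" never shows up, no matter how long the rules churn; seeing why requires stepping outside the system and interpreting the symbols, which is exactly the jump the footnote is gesturing at.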



Edit: I may have spoken too soon, but I would hate to repeat the error, so I'm going to wait until I'm done with the book to speak on this again.

2 comments:

Maggie said...

That's the risk we take by speaking in the first place.

Ah well. Silly karma.

Maggie said...

http://xkcd.com/372/