I was thinking today about replicating the convoluted little chunk of grey matter sitting between our ears that we call our brain in a computer. Specifically, I was wondering about the sensory interpretation among the brain's sub-centers, which I hold loosely analogous with computer co-processors.
Now, it is a fact that certain parts of the brain are responsible for certain activities, and many portions of the brain can be damaged or otherwise rendered nonfunctional without significantly debilitating the person who owns it. *1
For example, if you were to suffer damage to your hippocampus region you would, as beautifully shown in the movie Memento, lose the ability to form new memories. Other parts of your brain are responsible for speech generation - clinical studies have shown that people without this region functioning properly can write complete sentences and make sounds with their mouths, but are unable to speak.
This indicates to me that the synthetic computer brain should be largely parallel processing. Different subsystems ranging in complexity would be responsible for different tasks. There might be different subsections reserved for functions, and sub-subsections further breaking down that task into smaller parts. For example, you might have a section handling sound and another handling vision, and each might be further divided: the sound section might have a separate CPU for resolving voice input, another for identifying music, and yet another for identifying sounds. The vision center might have a separate colour processor, an edge-finder to determine shapes, a center to evaluate spatial relationships, and so forth.
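A minimal sketch of how such a hierarchy of subprocessors might be organized. Everything here - the class, the names, the layout - is my own invented illustration, not a real architecture:

```python
# Hypothetical sketch: sensory subsystems arranged as a tree of processors.
# All names here are invented illustrations from the examples above.

class Processor:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def walk(self, depth=0):
        """Yield each processor with its depth in the hierarchy."""
        yield self.name, depth
        for child in self.children:
            yield from child.walk(depth + 1)

brain = Processor("brain", [
    Processor("sound", [
        Processor("voice-resolver"),
        Processor("music-identifier"),
        Processor("sound-identifier"),
    ]),
    Processor("vision", [
        Processor("colour-processor"),
        Processor("edge-finder"),
        Processor("spatial-relations"),
    ]),
])

for name, depth in brain.walk():
    print("  " * depth + name)
```

Each leaf would run in parallel on its own hardware; the tree only expresses which center a subtask reports to.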
Another interesting phenomenon is the separation of the two hemispheres of the brain. One hemisphere is typically responsible for the brain's perception of "I". This one side is responsible for the brain's generation of sense-of-self, where we might think of ourselves as individuals. Every time you think "I'm hungry" or "I don't want to go to work" it is a thought generated by one side of the brain, and not the other. There is a possibility of a similar process being carried out in the other hemisphere, resulting in a kind of mental Siamese twin, but that is something for another discussion. Now then, if you sever the corpus callosum, which connects the right side of the brain with the left, some communication between the two still takes place, but at a reduced level. It is possible for the side NOT responsible for the 'self image' generation to receive sensory input and still communicate, on some level, with the other half.
Tests have been made where an image is shown to one eye while the other is closed, presenting a visual image to only one hemisphere of the brain. Normally the person being shown the object will be able to tell you what has been shown, regardless of which hemisphere (the dominant or submissive) has been shown the image. In patients where the corpus callosum is severed, the same test can be made with surprising results. If a patient's submissive hemisphere, the one NOT responsible for consciousness or sense of self, is shown an image, the patient is unable to tell you what the image was. However, they will be able to display an emotional response to this image. For example, when the submissive hemisphere is shown an image of a stranger performing violent acts, the patient is unaware of what is happening, but will feel angry or afraid or excited. When the other eye opens and the dominant hemisphere is able to see, the patient is introduced to the violent person the submissive hemisphere witnessed being violent. When subsequently asked about their feelings toward the person they've just met, patients typically respond that they don't like the person, or are uncomfortable or afraid when near them, but are unable to explain why.
I believe this to be evidence of a multi-layered communications network within the brain: a high-speed network where massive amounts of data are passed from subcenter to subcenter, allowing each 'CPU' to process the same data. As well, there is a low-level network, probably running on a more primitive chunk of brain matter and at a slower speed, which facilitates the communication of symbolic data. It is far quicker to say "a blue car" than to transfer a sensory map of the vehicle in question. In computer parlance, "a blue car" takes only ten bytes, or 80 individual bits of data *2. A visual picture of a car expressed in any appreciable detail would take several hundred thousand bytes, or millions of bits.
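The arithmetic can be checked directly. Assuming one byte per ASCII character, and picking an arbitrary modest image size (640x480 pixels at 24-bit colour - my assumption, purely for scale):

```python
# "a blue car" as ASCII text: one byte per character, eight bits per byte.
phrase = "a blue car"
n_bytes = len(phrase.encode("ascii"))
n_bits = n_bytes * 8
print(n_bytes, "bytes,", n_bits, "bits")  # 10 bytes, 80 bits

# An uncompressed image of the same car, assuming 640x480 pixels
# at 3 bytes (24 bits) of colour per pixel:
image_bytes = 640 * 480 * 3
print(image_bytes, "bytes")  # 921600 bytes - nearly a million, versus ten
```

The symbolic message is some five orders of magnitude smaller than the sensory one, which is the whole argument for running the two on separate networks.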
It is also possible, of course, that many centers of the brain don't require the full sensory input stream. I believe our memories work on both a symbolic and full-range level. What data is retained depends on the individual; everyone's brain is wired a little differently. When our blue car drives past a housewife, for example, she might remember only that it was, in fact, a car that was blue. This symbolic memory will carry links to other symbolic memories, so that upon recall ("What can you tell us about this car?") she has ready-made memories and assumptions like "Well, it had four tires, and it had glass windows, and oh yeah, the seats inside didn't match the blue paint," and so on. When the same car drives past a mechanic a similar process takes place, but the memory stored will probably be more detailed in many respects. This memory might include a more accurate and detailed picture as well as the symbolic memory. The mechanic might recall such details as the make and model of the car, whether it sounded in good repair or not, and upon this recollection the detailed image would arise that "Oh, and it had new paint on the rear panel."
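One way to sketch this kind of symbolic memory is as a small graph, where recalling one symbol pulls in the symbols linked to it. The particular symbols and links below are invented for illustration:

```python
# Hypothetical sketch: symbolic memory as a graph of linked symbols.
# Recall of one symbol drags in its "ready-made memories and assumptions".
symbolic_memory = {
    "blue car": ["car", "blue"],
    "car": ["four tires", "glass windows", "seats"],
    "blue": ["paint"],
}

def recall(symbol, memory):
    """Depth-first recall: the symbol itself plus everything it links to."""
    seen, stack = [], [symbol]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.append(s)
            stack.extend(memory.get(s, []))
    return seen

print(recall("blue car", symbolic_memory))
```

The housewife and the mechanic would share the same recall mechanism; the mechanic's graph would simply be denser, with extra nodes like make, model, and engine sound, plus links out to the detailed stored image.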
It is a human function that we remember only what is interesting or different. This holds true for our sensation of time - we can more accurately remember things we find interesting, hence the mechanic's memory of the vehicle our housewife found rather uninteresting. In this same vein, a travelling salesman might have a more accurate perception of the current date than a man on vacation. One person experiences many different things in a day, and his mental timeline might be punctuated with more details, and hence more accurate recall, than our man on the beach. It is the advantage of our computer system that the recall would be, if not faster, far more accurate, especially regarding the passage of time.
Now supposing we re-create the brain using subprocessors designed to mimic all this functionality. It might be a somewhat arbitrary decision to mimic the human brain, but at this point it's what we're familiar with and is our best chance, I believe, of creating functionally equivalent devices or intelligences. The first thing we might do is determine the protocol used for communication between nodes. A low-speed, high-tolerance symbolic network should run concurrently with a high-speed, high-capacity network. Our visual processors could simultaneously determine, in different subsystems, that the object in front of us is white, round, and coming towards us very quickly at a height of about two meters. Several processes would be involved, and would be routed to different centers of the brain. The symbolic network would only need to carry a minimal set of data: "white ball, head height, incoming vector!" While this message is being routed to our synthetic hypothalamus for a 'fight or flight' decision, our short term memory would get the symbolic message as well, and so would our reflex centers, memory systems, etcetera.
This message is going to be received and processed (or not) based on the data available. The hypothalamus, responsible for determining whether there is cause for fighting or running away from whatever we perceive, might decide no response is available based on the current data. Then the short term memory, which received the same data, broadcasts its message of "ongoing baseball game, incoming baseball". The reflex center, sort of a physical memory cache-RAM, broadcasts a message like "The usual response is to hit the ball with the bat." Physical subcenters report that our position matches the long-term and reflex memory's 'ready state' and that a bat is in the hands. Reflex center says go, the fight or flight response is 'fight', the visual system has indicated the position and vector of the ball, the reflex center streams a constantly updating movement request, the movement center signals the muscles, and so on.
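The broadcast-and-react flow above might be sketched as a tiny publish/subscribe bus, where every subcenter sees every symbolic message and responds only when it recognizes one. All the centers and message strings below are invented to match the baseball scenario, nothing more:

```python
# Hypothetical sketch of the symbolic network: a broadcast bus where each
# subcenter reacts (or not) to every message, based on what it knows.

class Bus:
    def __init__(self):
        self.subscribers = []
        self.log = []  # every symbolic message ever broadcast, in order

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def broadcast(self, sender, message):
        self.log.append((sender, message))
        for handler in list(self.subscribers):
            handler(sender, message)

bus = Bus()

def short_term_memory(sender, message):
    # Recognizes the raw perception and adds situational context.
    if message == "white ball, head height, incoming vector":
        bus.broadcast("short-term memory",
                      "ongoing baseball game, incoming baseball")

def reflex_center(sender, message):
    # A physical cache-RAM: known situation -> known response.
    if message == "ongoing baseball game, incoming baseball":
        bus.broadcast("reflex center", "hit the ball with the bat")

bus.subscribe(short_term_memory)
bus.subscribe(reflex_center)

# The visual system reports; the other centers cascade from there.
bus.broadcast("visual system", "white ball, head height, incoming vector")
for sender, message in bus.log:
    print(f"{sender}: {message}")
```

The interesting property is that no center addresses another directly: the visual system doesn't know the reflex center exists, yet the swing still happens. That loose coupling is what would let individual subprocessors fail without taking the whole system down, much like the damaged-but-functional brains described earlier.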
Clearly the task is complicated, but it's an immensely interesting challenge, don't you think? The things we could learn from ourselves while recreating our most prized organ just stagger me. Coming up with various CPU requirements, localized cache-RAM, physical interconnects and network protocols... What a challenge! And the rewards would be enormous. Ignoring for a moment the intricate and perhaps unknowable mysteries of consciousness, we might still end up with incredible synthetic logic for planetary explorers, remote autonomous probes... The possibilities are staggering, to me. I love this shit.
*1 Naturally in this the phrase "significantly debilitating" depends on the individual's needs, requirements and priorities, but I use it interchangeably with "won't kill you or make you any sort of invalid."
*2 A bit is a single datum consisting of a binary value. It can be only one of two states, commonly represented as a one or zero, on or off. This is the most basic form of data a computer deals with.