Couldn't Sleep

Last night, I had one of those nights where I just lie awake, my mind buzzing. Not that that's entirely bad.

While doing a little research on a software engineering problem, I discovered (and, hence, started reading) Gödel, Escher, Bach: An Eternal Golden Braid.

I must say that I am surprised none of my undergraduate professors even mentioned this book. And, I took a course in A.I.

I don't quite know what it is about this book, but I feel compelled to read it with my browser tabs firmly set to Google and Wikipedia search pages. Hofstadter imbues his tangential back-stories with so much interest that I feel like I'm missing some essential part of my knowledge of science and engineering history. Although, I must say, I never expected to really dig into musical theory to the point where I can distinguish the difference between a canon and a fugue (or even that they are distinct forms). Even more unexpected than merely gaining that basic knowledge, I was surprised to find a level of musical complexity that was far beyond my self-taught understanding of musical theory. I had never considered that an amazingly complex composition isn't made complex merely by virtue of the number of notes/voices, the off-center key signatures, the tempo, or the dynamics. Within the primary example of the first chapter there are layers of complexity that fall outside of "wow, that's a technically complex piece." It's nearly impossible for my engineering brain to explain it as well as it deserves. So, read the book. Seriously.

Adding to my strange night, my brain almost went into some kind of biological spinlock thinking of the possibilities of self-referential systems. It's a strange experience, indeed, to use my own teetering bulb of dread and dream to try to work out how scientists and engineers try to mimic our minds. The model with which I'm most familiar is the perceptron. This model and its extensions have been relatively successful in producing some truly amazing mimicry of what we think is thinking. Since a lot of us technical-types rely so heavily on our perception of the world, the perceptron model of pattern recognition and learning is very relatable.
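To make the perceptron idea concrete, here's a minimal sketch of Rosenblatt's classic model: a weighted sum of inputs pushed through a hard threshold, with a simple error-driven update rule. The AND-gate training data and the learning rate are my own illustrative choices, not anything specific to what I was reading.

```python
# A minimal perceptron: weighted sum + hard threshold, trained by
# nudging the weights toward each misclassified example.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: correct each mistake a little."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the AND function -- linearly separable, so one perceptron suffices.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_gate)
```

The whole "learned" behavior lives in two weights and a bias, which is exactly why the model feels so relatable: recognition reduced to a tuned sum of perceptions.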

But, it always bothered me that our research into perceptrons has largely been focused (as far as I am currently aware) on modeling the systems in software. Granted, a few neural networks have been engineered in hardware from the ground up, but I think some of the basics are still missing. By "basics," I mean how we implement computer hardware based on the standard instruction/register/memory architecture that anything called a "microprocessor" must implement. I have a strong feeling that computer science and engineering will have to push technology far outside of our current comfort zones if we are going to make progress in modeling thought.

Now, I'm a huge fan of using the same memory to store both data and instructions. I think the genius that went into that very primal part of my engineering life doesn't get the attention and focus it deserves in an engineering education. That being said, I'm not entirely convinced even that can be a safe foundation for building a thinking machine. It brings up questions like, "Does my brain have an operational clock?" "Does my brain scale to accommodate new instructions (or whatever their biological equivalent might be)?" "Is there some sort of global address space from which our memories can be indexed and recalled?"
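The stored-program idea can be sketched in a few lines. This toy machine is my own construction for illustration: instructions and data share one flat address space, so a program can read its own data cells and, in principle, overwrite its own instructions.

```python
# A toy stored-program machine: code and data live in the same memory
# list, interpreted as (opcode, operand) pairs plus plain data cells.

def run(memory):
    """Fetch-decode-execute over a single shared memory."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":
            acc = memory[arg]            # read a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc            # could overwrite code, too
        elif op == "HALT":
            return acc

# Cells 0-7 are code, cells 8-9 are data -- one undifferentiated space.
memory = ["LOAD", 8, "ADD", 9, "STORE", 9, "HALT", 0, 2, 3]
result = run(memory)
```

Nothing in the memory itself marks a cell as "instruction" or "data"; only the program counter's path makes the distinction, which is the primal trick I'm gushing about.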

I have an intuitive feeling that the answers to these questions are no.

First, I don't believe we have a clock. There may be operations performed at the neuronal level that act like a flip-flop, but even then, it seems like we function as a 100% real-time feedback-and-response system. Just look at simple spinal reflex arcs. There's no clock synchronizing any of it.

Also, what the heck is a biological instruction? Certainly, we have a molecular instruction/interpreter system in place within our ribosomes. But, is there a much faster macro-scale equivalent in our brains? Does a certain pattern of electrical impulses spread across a group of neurons result in a thought or physical I/O? Must these patterns be localized to some sort of wet bus; perhaps in the limbic system or corpus callosum? This is just a wild guess, but: what an engineer calls an instruction has no direct analog in our minds. I think the decentralized properties of our brains preclude the idea of some sort of consistent instruction language.

I think this goes back to a simpler discussion about pattern recognition. Machines won't be able to mimic a mind based on how we build microprocessors, because microprocessors do a poor job of pattern recognition by themselves. And, that's not only what our brains are good at, it's how they operate at a fundamental level. Our brains are wired to be able to instantly recognize input signals. Within the blink of an eye, we unconsciously filter, classify, categorize, and prioritize those inputs. In many cases, our brain might even take action (output) based on those inputs before we can even consciously perceive what's coming in. But, at a deeper level, I believe our "instructions" or "symbols" or "language" or whatever are not the result of some kind of dictionary look-up. They are a unique pattern within our individual network that can't be modeled on any kind of computer we currently use in common science and engineering.

A thinking machine really needs to be built from the hardware up. No clocks. No instructions. No layers isolating hardware and software. A thinking machine will be able to internally imprint some sort of pattern of connections, that then allows it to interpret additional patterns... possibly imprinting new patterns of connections...
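One existing model that gestures at this "imprinting" idea is Hebbian learning in a Hopfield-style network: a pattern is stored in the connection weights themselves, and that wiring alone, with no clock and no instruction stream, pulls a degraded input back toward the stored pattern. This sketch and its six-unit pattern are my own toy example, not a proposal for the machine above.

```python
# Hebbian imprinting: units that fire together get stronger links.
# Recall: each unit repeatedly follows the pull of its weighted neighbors,
# so the network settles back onto an imprinted pattern.

def imprint(patterns, n):
    """Build a symmetric weight matrix from +1/-1 patterns (Hebb's rule)."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Let the network settle; no program, just connections."""
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, 1, 1, -1, -1, -1]        # the imprinted pattern
w = imprint([stored], len(stored))
noisy = [1, -1, 1, -1, -1, -1]        # one unit flipped
restored = recall(w, noisy)           # the wiring alone repairs the cue
```

It's still simulated on a clocked instruction machine, of course, which is exactly the layer of irony I'd want a real thinking machine to shed.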