2016-10-20

Musings on intelligence

Hello,

Here are some of my thoughts (a basic braindump) on the subject of intelligence. This blog is probably not the best place for a post on such a subject (in that nobody who might understand what I mean will ever read this), but perhaps it is better to put my thoughts here rather than nowhere at all, on the off chance that someone will stumble upon this post through Google.

Before I begin I must point out that while I am a software developer by profession, I haven't had much formal schooling on the subject and may very well be re-inventing the wheel.

The thing is, I've had various dealings with these subjects lately, and my intuition has a way of stringing together information from different sources to further my understanding of vaguely defined concepts.

One of my projects as of late is to code an associative matrix for a certain piece of automation. Though I am relatively certain that nobody will be particularly happy with the end result, as it will quite literally have a mind of its own, I'll get paid for it, and that's all I care about with that project at this point.

The thing I've of course been thinking about is whether or not this associative matrix will have what it takes to form intelligence. I think I can safely say, without anyone checking my credentials, that the human brain is a neural network. Out of a certain historic fascination with the subject (neurology) and out of my understanding of it as a software developer -- rather than out of ignorance -- I will say that neural networks are essentially associative matrices: they take the occurrence of a witnessed event and associate it with an end result, eventually being able to produce the end result by evoking the previously witnessed event. This is what enables us to learn how to problem-solve, and it is the same thing that makes my code in the aforementioned project tick. This is fascinating, but not news.
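To make that concrete, here is a toy sketch of what I mean by an associative matrix. The class and the example events are made up for illustration; this is not the actual code from the project, just the general shape of the idea:

    from collections import defaultdict

    class AssociativeMatrix:
        """Toy associative matrix: counts how often an event is witnessed
        together with an outcome, and later recalls the outcome most strongly
        associated with that event."""

        def __init__(self):
            # strength[event][outcome] = number of times they were witnessed together
            self.strength = defaultdict(lambda: defaultdict(int))

        def associate(self, event, outcome):
            """Witness an event together with its end result."""
            self.strength[event][outcome] += 1

        def recall(self, event):
            """Evoke the end result most strongly associated with the event."""
            outcomes = self.strength.get(event)
            if not outcomes:
                return None
            return max(outcomes, key=outcomes.get)

    matrix = AssociativeMatrix()
    matrix.associate("doorbell", "visitor")
    matrix.associate("doorbell", "visitor")
    matrix.associate("doorbell", "parcel")
    print(matrix.recall("doorbell"))  # prints "visitor"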

What I've been thinking about is whether this associative matrix will unexpectedly produce the ability to learn things as we do, or not. I suspect that what is needed for true intelligence is abstract reasoning, which can only come about if an associative matrix is ... recursive, if you get what I mean. Basically, I think it needs to be able to make associations about associations. I am worried that such a structure might be unstable and occasionally produce nonsense, unless it's wired exactly right.
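In code terms, and building on the toy sketch above (again with made-up example events), the recursive part would be a second matrix whose "events" are themselves associations produced by the first one:

    first = AssociativeMatrix()
    meta = AssociativeMatrix()

    first.associate("dark clouds", "rain")
    first.associate("rain", "wet grass")

    # An association from the first matrix becomes an event in its own right,
    # so the second matrix forms associations about associations.
    learned = ("dark clouds", first.recall("dark clouds"))
    meta.associate(learned, "take an umbrella")

    print(meta.recall(("dark clouds", "rain")))  # prints "take an umbrella"

Whether a structure like this stays stable, or just produces the nonsense I am worried about, is exactly the open question.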

I am wondering if this facility in humans is one of those things that is genetically hardwired to work properly (and therefore also needs to be hardwired in my AI), or one of those things that any monolithic associative matrix can accomplish (and is therefore implementable in a generic fashion).

Let me try to present what I think I've been seeing. I have, for a while now, been fascinated with the olfactory cortex of my dog, a part of the brain that I as a human do not have. The fact that dogs can smell better than humans is not news, but the fact that they have an entire brain cortex dedicated to processing information from this very sensitive sense of smell is fascinating. I wondered if I could possibly imagine what it's like. I was observing my dog enjoying the olfactory puzzles I can create by tossing a rock into a field of grass, so clearly there is some nontrivial processing taking place. It is furthermore interesting that this cortex sits right in the middle of the canine frontal cortex, which in itself must have interesting consequences.

The enlarged human frontal cortex is said to have developed "in order" to facilitate our understanding of social interactions between individuals in human society (apparently the size of this part of the brain correlates with the size of the group the primate in question lives in). The fact that canines establish social status based on scent obviously cannot be a coincidence. What if the disappearance of the olfactory cortex in humans is what caused the brain matter in our frontal cortex to "fold in on itself", with the brain matter that would normally establish connections with the olfactory cortex instead forming connections into itself, forming the basis for abstract reasoning?

If yes -- though I guess it doesn't really matter what the historic facts are -- then I just need to code a single additional layer of, er, meta-association for my associative matrix to accurately mimic human reasoning. *grin*

No, obviously it cannot be this simple. :) We will see.

LP,
Jure
