towards a definition of intelligence
So, have we done enough work that we can now make a reasonable guess at a definition of intelligence? Let's see. In my travels I have seen one definition along the lines of: an intelligent agent, given its current situation, will manipulate things so as to maximize the number of potential future states. So, if such an agent is stuck in a valley, it will climb to the top of the hill to maximize its potential pathways.
Mathematically, roughly (in a simplified one dimension):
F = dV(x)/dx
where V(x) is the landscape, and F is the direction you want to head (ie, uphill, towards more options).
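As a purely illustrative sketch of that idea (not anything from this project), here is a one-dimensional hill-climb in Python, using a made-up landscape V(x) and a simple numerical derivative:

# toy illustration of "head uphill to maximize future options"
# V(x) is an arbitrary made-up landscape, not anything from the project.

def V(x):
    return -(x - 3.0) ** 2 + 9.0           # a single hill with its peak at x = 3

def dV_dx(x, h=1e-5):
    return (V(x + h) - V(x - h)) / (2 * h)  # numerical derivative

x = 0.0                                     # agent starts down in the valley
for step in range(100):
    F = dV_dx(x)                            # F = dV(x)/dx, the direction to head
    x += 0.1 * F                            # take a small step uphill
print(x)                                    # ends up near the top of the hill, x ~ 3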
I have an alternate definition:
1) given the agent's current state (as represented by some superposition), find a pathway (as represented by some operator sequence) to its desired state (again, represented by some superposition). The quicker the agent can do this, and the shorter the pathway, the more intelligence points we give that agent. Noting that for sufficiently hard problems, most agents won't be able to find a pathway at all. (A toy search along these lines is sketched below this list.)
2) given an object the agent wishes to understand, how well constructed is the agent's internal representation of that object? At one extreme we have rote learning, say you recall an object's definition word for word, with essentially no understanding. At the other we have a very dense network linking the object with the rest of the knowledge in the agent's memory store. The denser the network, the more intelligence points we give that agent. And I suppose we should give some points for speed as well. (Again, a toy version is sketched below.)
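Here is a toy illustration of definition 1. This is not the project's actual code, just a sketch: states are Python frozensets standing in for superpositions, the operators are made up, and a breadth-first search finds a shortest operator sequence from the current state to the desired state:

from collections import deque

# toy illustration of definition 1: find an operator sequence from the
# current state to the desired state, preferring shorter pathways.

def add_b(state):  return state | {"b"}
def add_c(state):  return state | {"c"}
def drop_a(state): return state - {"a"}

operators = {"add-b": add_b, "add-c": add_c, "drop-a": drop_a}

def find_pathway(current, desired, max_depth=5):
    # breadth-first search, so the first pathway found is a shortest one
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        state, path = queue.popleft()
        if state == desired:
            return path
        if len(path) >= max_depth:
            continue
        for name, op in operators.items():
            next_state = op(state)
            if next_state not in seen:
                seen.add(next_state)
                queue.append((next_state, path + [name]))
    return None                                # no pathway found: a "hard" problem

current = frozenset({"a"})
desired = frozenset({"b", "c"})
path = find_pathway(current, desired)
print(path)                                    # ['add-b', 'add-c', 'drop-a']
score = 1.0 / len(path) if path else 0.0       # shorter pathway => more points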
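And here is a toy illustration of definition 2, again just a sketch with a made-up memory store: score how densely an object is linked into the rest of the agent's knowledge.

# toy illustration of definition 2: how densely is an object linked into
# the rest of the agent's knowledge? The memory store below is made up.

memory = {
    "apple":  {"fruit", "red", "tree", "pie"},
    "fruit":  {"apple", "tree", "sweet"},
    "tree":   {"apple", "fruit", "leaf"},
    "pie":    {"apple", "sweet"},
    "rote-fact": set(),                        # recalled word for word, linked to nothing
}

def density_score(obj, memory):
    # count links out of the object, plus links among its neighbours,
    # as a crude stand-in for "dense network representation"
    neighbours = memory.get(obj, set())
    direct = len(neighbours)
    cross = sum(len(memory.get(n, set()) & neighbours) for n in neighbours)
    return direct + cross

print(density_score("apple", memory))          # well-linked object => high score
print(density_score("rote-fact", memory))      # rote learning => score of 0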
Comments:
1) the above is somewhat dependent on the agent already having a large body of knowledge. This isn't perfect, since young children do not have as much knowledge as adults, but in some regards are far more intelligent than adults. Frankly, it is hard work to boot-strap from nothing to a thorough understanding of the world.
2) if you ever watch Richard Feynman talk, it is obvious he had a very dense network representation of physics in his head. Everything was linked to everything. This gives him lots of (2) points in my scheme, but then he was a physics genius!
3) OK. So how do we build an intelligent agent? Heh. No one knows!! My guess is that it requires at least three components: 1) a processing center (eg the neocortex), 2) a memory system (eg the hippocampus), and 3) an attention system (eg the thalamus). I personally think the attention system is the most important of the three. We need some system to filter and only attend to what is currently important, and to dynamically change attention as needed. Indeed, this sounds an awful lot like a von Neumann architecture computer, with CPU, RAM and instruction pointer (as the attention system)! But in detail they are quite different. Especially the attention system: what I have in mind is a lot more involved than an instruction pointer. (A toy version of this loop is sketched after these comments.)
4) superpositions and operator sequences should be sufficient to represent any current state, or pathway between states, of interest. That is the main thesis of this project! Is there anything that can't be represented this way? I don't know. But the implication would be that a human brain couldn't represent it either.
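To make comment 3 a little more concrete, here is a purely speculative toy loop. It is not a claim about how brains work, nor project code; just a memory store, a processing step, and an attention filter that decides what gets processed and can be grabbed by something urgent:

# purely speculative toy of comment 3: a processing centre, a memory system,
# and an attention system, with made-up events and focus.

memory = []                                     # the "hippocampus": what the agent stores

def attention(events, focus):
    # the "thalamus": only pass through events relevant to the current focus,
    # but let anything urgent grab the attention instead
    urgent = [e for e in events if "loud" in e]
    if urgent:
        return urgent                           # attention switches dynamically
    return [e for e in events if focus in e]

def process(events):
    # the "neocortex": do something with whatever was attended to
    return ["processed: " + e for e in events]

focus = "food"
world = [["food nearby", "shadow"], ["loud noise", "food smell"], ["food smell", "shadow"]]
for events in world:
    attended = attention(events, focus)
    memory.extend(process(attended))            # store the results for later use

print(memory)   # ['processed: food nearby', 'processed: loud noise', 'processed: food smell']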
updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org