Given the training data set D:

D = {(X1,Y1),(X2,Y2),...(Xn,Yn)}

where Xi and Yi are superpositions (and must not be empty superpositions, i.e. with all coeffs equal to 0)

Then learn these rules:

pattern |node: 1> => X1

pattern |node: 2> => X2

...

pattern |node: n> => Xn

M |node: 1> => Y1

M |node: 2> => Y2

...

M |node: n> => Yn

Then given the unlabeled data set U = {Z1,Z2,...Zm}, where Zi are superpositions of the same type as Xi, learn these rules:

input-pattern |example: 1> => Z1

input-pattern |example: 2> => Z2

...

input-pattern |example: m> => Zm

Then here is one proposed h: X -> Y:

h |*> #=> M drop-below[0.7] similar[input-pattern,pattern] |_self>

Here is another proposed h: X -> Y:

h2 |*> #=> coeff-sort M similar[input-pattern,pattern] |_self>
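To make the semantics of h and h2 concrete, here is a minimal Python sketch. It assumes similar[] is the simm-style overlap measure, simm(f, g) = sum_i min(f_i, g_i) / max(sum f, sum g), with superpositions represented as dicts of ket -> coeff; the names simm, h, patterns and M are illustrative, not the sw console API.

```python
def simm(f, g):
    """Similarity in [0, 1] between two superpositions (dicts of ket -> coeff).
    Assumed definition: sum of min coeffs over max of the two coeff sums."""
    overlap = sum(min(f.get(k, 0), g.get(k, 0)) for k in set(f) | set(g))
    norm = max(sum(f.values()), sum(g.values()))
    return overlap / norm if norm else 0.0

def h(z, patterns, M, t=0.7):
    """Sketch of: M drop-below[t] similar[input-pattern, pattern] |_self>.
    Sum the Yi of every node whose pattern Xi is at least t-similar to
    the input z, weighted by that similarity."""
    result = {}
    for node, x in patterns.items():
        s = simm(z, x)
        if s >= t:                                   # drop-below[t]
            for ket, coeff in M[node].items():
                result[ket] = result.get(ket, 0) + s * coeff
    # coeff-sort: highest-coefficient kets first
    return sorted(result.items(), key=lambda kv: -kv[1])
```

For example, with patterns = {"node: 1": {"a": 1, "b": 1}} and M = {"node: 1": {"class: X": 1}}, an exact-match input {"a": 1, "b": 1} returns [("class: X", 1.0)].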

A worked example coming up next!

Update: I suspect for more complex data sets we will need something like this:

h |*> #=> coeff-sort M some-complex-function similar[input-pattern,pattern] |_self>

for some as yet unknown "some-complex-function" operator.

Update: I suspect we can use this scheme to implement large numbers of if/then rules. e.g. perhaps:

"if Xi then Yi" using:

if-pattern |node: i> => Xi

M |node: i> => Yi

h |*> #=> M drop-below[t] similar[input-pattern,if-pattern] |_self>

Update: here is a proof of concept:

sa: dump
----------------------------------------
|context> => |context: sw console>

if-pattern |node-1> => |Fred is whistling>
M |node-1> => |Fred is happy>

if-pattern |node-2> => |Sam is whistling>
M |node-2> => |Sam is happy>

input-pattern |x> => |Fred is whistling>
input-pattern |y> => |Sam is whistling>

h |*> #=> M similar[input-pattern,if-pattern] |_self>
----------------------------------------
sa: h |x>
|Fred is happy>

sa: h |y>
|Sam is happy>

Looks good. And of course, this means we can easily load up large numbers of rules like this, and pass them around the internet as sw files.
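The proof of concept above can be mimicked in a few lines of plain Python. Exact string match stands in for similar[], since identical single kets have similarity 1; the names if_pattern, M and h are illustrative, not the sw console API.

```python
# The if/then rules from the proof of concept, as plain dicts:
if_pattern = {
    "node-1": "Fred is whistling",
    "node-2": "Sam is whistling",
}
M = {
    "node-1": "Fred is happy",
    "node-2": "Sam is happy",
}

def h(x):
    """Return the consequence of every rule whose if-pattern matches x."""
    return [M[node] for node, pattern in if_pattern.items() if pattern == x]
```

So h("Fred is whistling") returns ["Fred is happy"], matching the console output.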

Update: let's tidy up our proof of concept example using similar-input[op] and if-then machines:

----------------------------------------
|context> => |context: simple if-then machine>

pattern |node: 1> => |Fred is whistling>
then |node: 1> => |Fred is happy>

pattern |node: 2> => |Sam is whistling>
then |node: 2> => |Sam is busy>

implies? |*> #=> then similar-input[pattern] |_self>
----------------------------------------

Now put it to use:

sa: implies? |Fred is whistling>
|Fred is happy>

sa: implies? |Sam is whistling>
|Sam is busy>

OK. Very simple example. But I hope it shows something interesting. And with a little imagination it could be used for all sorts of things. Note that in this example I used single kets as the patterns and consequences, but with not much work we could do the same using superpositions. Go see the if-then machines examples for that. The next question is: how do we auto learn these kinds of rules? We don't want to go the Cyc route and hand define everything! Certainly we would expect a full AGI to be able to auto learn rules. But can we have auto learning and yet not have a full AGI?

Home

previous: supervised pattern recognition

next: supervised learning of iris classes

updated: 19/12/2016

by Garry Morrison

email: garry -at- semantic-db.org