learning and recalling a simple sentence

In this post we are going to use HTM-inspired sequences to learn a short, simple, grammatically correct sentence. This is a nice follow-on from learning to spell, and recalling chunked sequences. The key idea is that the brain stores sentences as sequences of classes, and when we recall a sentence we unpack that structure. So how do we implement this? Well, we can easily represent sequences, as seen in previous posts, and classes are simple enough. So the hard bit becomes finding an operator that can recall the sentence.

Let's start with this "sentence", or sequence of classes (dots are our shorthand notation for sequences):
A . X . B . Y . C
where we have these classes:
A = {the}
B = {man, woman, lady}
C = {used a telescope}
X = {{}, old, other}
Y = {{}, on the hill, also}
And that is enough to generate a collection of grammatically correct sentences: just pick randomly from each class at each step in the sequence, noting that {} is the empty sequence. How many sentences? Just multiply the sizes of the classes:
|A|*|X|*|B|*|Y|*|C| = 1*3*3*3*1 = 27
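As a quick sanity check of that count, here is a minimal sketch in plain Python (standard library only, nothing project-specific; the class names simply mirror the ones above):
from itertools import product

A = ["the"]
X = ["", "old", "other"]            # "" plays the role of the empty sequence {}
B = ["man", "woman", "lady"]
Y = ["", "on the hill", "also"]
C = ["used a telescope"]

sentences = [" ".join(w for w in words if w)    # skip empty slots
             for words in product(A, X, B, Y, C)]
for s in sentences:
    print(s)
print(len(sentences))               # 27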
Now on to the code. First up, we need to encode the objects we intend to use in our sequences. As in previous posts, our encoded SDRs are just 10 randomly chosen on-bits out of 2048 total:
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>

-- encode words:
encode |old> => pick[10] full |range>
encode |other> => pick[10] full |range>
encode |on> => pick[10] full |range>
encode |the> => pick[10] full |range>
encode |hill> => pick[10] full |range>
encode |also> => pick[10] full |range>
encode |man> => pick[10] full |range>
encode |used> => pick[10] full |range>
encode |a> => pick[10] full |range>
encode |telescope> => pick[10] full |range>
encode |woman> => pick[10] full |range>
encode |lady> => pick[10] full |range>

-- encode classes:
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
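For those who prefer plain Python, here is a rough sketch of what these encode rules amount to, assuming only the standard library (the real pick[10] and range operators live in the console):
import random

TOTAL_BITS = 2048   # size of the bit space
ON_BITS = 10        # bits switched on per SDR

def encode(symbol, _memo={}):
    # memoize, so each symbol keeps the same random SDR once assigned
    if symbol not in _memo:
        _memo[symbol] = frozenset(random.sample(range(1, TOTAL_BITS + 1), ON_BITS))
    return _memo[symbol]

print(sorted(encode("old")))              # 10 random bit positions
print(encode("old") == encode("old"))     # True -- stable per symbol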
Next, define our low-level sequences of words, though most of them are sequences of length one:
-- empty sequence
pattern |node 1: 1> => append-column[10] encode |end of sequence>

-- old
pattern |node 2: 1> => random-column[10] encode |old>
then |node 2: 1> => append-column[10] encode |end of sequence>

-- other
pattern |node 3: 1> => random-column[10] encode |other>
then |node 3: 1> => append-column[10] encode |end of sequence>

-- on, the, hill
pattern |node 4: 1> => random-column[10] encode |on>
then |node 4: 1> => random-column[10] encode |the>

pattern |node 4: 2> => then |node 4: 1>
then |node 4: 2> => random-column[10] encode |hill>

pattern |node 4: 3> => then |node 4: 2>
then |node 4: 3> => append-column[10] encode |end of sequence>

-- also
pattern |node 5: 1> => random-column[10] encode |also>
then |node 5: 1> => append-column[10] encode |end of sequence>


-- the
pattern |node 6: 1> => random-column[10] encode |the>
then |node 6: 1> => append-column[10] encode |end of sequence>

-- man
pattern |node 7: 1> => random-column[10] encode |man>
then |node 7: 1> => append-column[10] encode |end of sequence>

-- used, a, telescope
pattern |node 8: 1> => random-column[10] encode |used>
then |node 8: 1> => random-column[10] encode |a>

pattern |node 8: 2> => then |node 8: 1>
then |node 8: 2> => random-column[10] encode |telescope>

pattern |node 8: 3> => then |node 8: 2>
then |node 8: 3> => append-column[10] encode |end of sequence>

-- woman
pattern |node 9: 1> => random-column[10] encode |woman>
then |node 9: 1> => append-column[10] encode |end of sequence>

-- lady
pattern |node 10: 1> => random-column[10] encode |lady>
then |node 10: 1> => append-column[10] encode |end of sequence>
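If the pattern/then rules look opaque, here is a stripped-down Python picture of the "on the hill" sequence, ignoring the SDR machinery and storing raw words instead (an illustration only, not the project's actual representation):
pattern = {"node 4: 1": "on", "node 4: 2": "the", "node 4: 3": "hill"}
then = {"node 4: 1": "node 4: 2",
        "node 4: 2": "node 4: 3",
        "node 4: 3": "end of sequence"}

node = "node 4: 1"
while node != "end of sequence":
    print(pattern[node])    # on, the, hill
    node = then[node]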
Here is the easiest bit, representing the word classes:
-- X: {{}, old, other}
start-node |X: 1> => pattern |node 1: 1>
start-node |X: 2> => pattern |node 2: 1>
start-node |X: 3> => pattern |node 3: 1>

-- Y: {{}, on the hill, also}
start-node |Y: 1> => pattern |node 1: 1>
start-node |Y: 2> => pattern |node 4: 1>
start-node |Y: 3> => pattern |node 5: 1>

-- A: {the}
start-node |A: 1> => pattern |node 6: 1>

-- B: {man, woman, lady}
start-node |B: 1> => pattern |node 7: 1>
start-node |B: 2> => pattern |node 9: 1>
start-node |B: 3> => pattern |node 10: 1>

-- C: {used a telescope}
start-node |C: 1> => pattern |node 8: 1>
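In plain Python terms, a class is then just a list of pointers to the first node of each member sequence, with pick-elt playing the role of random.choice. A minimal sketch, using the node names from above:
import random

start_node = {
    "A": ["node 6: 1"],
    "X": ["node 1: 1", "node 2: 1", "node 3: 1"],
    "B": ["node 7: 1", "node 9: 1", "node 10: 1"],
    "Y": ["node 1: 1", "node 4: 1", "node 5: 1"],
    "C": ["node 8: 1"],
}

print(random.choice(start_node["B"]))   # eg "node 9: 1", ie "woman"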
Finally, we need to define our sentence "A . X . B . Y . C", i.e., our sequence of classes:
-- A, X, B, Y, C
pattern |node 20: 1> => random-column[10] encode |A>
then |node 20: 1> => random-column[10] encode |X>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |B>

pattern |node 20: 3> => then |node 20: 2>
then |node 20: 3> => random-column[10] encode |Y>

pattern |node 20: 4> => then |node 20: 3>
then |node 20: 4> => random-column[10] encode |C>

pattern |node 20: 5> => then |node 20: 4>
then |node 20: 5> => append-column[10] encode |end of sequence>
And that's it. We have learnt a simple sentence in a proposed brain-like way, using just sequences and classes. For the recall stage we need to define an appropriate operator. After some thought, we have this Python:
# one is a superposition (sp) representing the current pattern
def follow_sequence(one, context, op=None):
  if len(one) == 0:
    return one

  # step to the next node: find the best-matching stored pattern,
  # clean it up, then apply the "then" operator
  def next(one):
    return one.similar_input(context, "pattern").select_range(1, 1).apply_sigmoid(clean).apply_op(context, "then")

  # recover the name of the object encoded at the current node
  def name(one):
    return one.apply_fn(extract_category).similar_input(context, "encode").select_range(1, 1).apply_sigmoid(clean)

  current_node = one
  while name(current_node).the_label() != "end of sequence":
    if op is None:
      print(name(current_node))
    else:
      name(current_node).apply_op(context, op)
    current_node = next(current_node)
  return ket("end of sequence")
And these operator definitions:
-- operators:
append-colon |*> #=> merge-labels(|_self> + |: >)
random-class-sequence |*> #=> follow-sequence start-node pick-elt starts-with append-colon |_self>
random-sequence |*> #=> follow-sequence start-node pick-elt rel-kets[start-node] |>
print-sentence |*> #=> follow-sequence[random-class-sequence] pattern |_self>
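To make the two-level recall concrete, here is a plain-Python analogue of what print-sentence is doing: the outer loop walks the class sequence, and each step hands off to an inner walk over a randomly chosen member sequence. The dicts (word, nxt, start_node) are trimmed-down stand-ins for the pattern, then, and start-node rules above, not the project's actual machinery:
import random

word = {"node 1: 1": "", "node 2: 1": "old", "node 3: 1": "other",
        "node 4: 1": "on", "node 4: 2": "the", "node 4: 3": "hill",
        "node 5: 1": "also", "node 6: 1": "the", "node 7: 1": "man",
        "node 8: 1": "used", "node 8: 2": "a", "node 8: 3": "telescope",
        "node 9: 1": "woman", "node 10: 1": "lady"}
nxt = {"node 4: 1": "node 4: 2", "node 4: 2": "node 4: 3",
       "node 8: 1": "node 8: 2", "node 8: 2": "node 8: 3"}
start_node = {"A": ["node 6: 1"],
              "X": ["node 1: 1", "node 2: 1", "node 3: 1"],
              "B": ["node 7: 1", "node 9: 1", "node 10: 1"],
              "Y": ["node 1: 1", "node 4: 1", "node 5: 1"],
              "C": ["node 8: 1"]}

def print_random_member(cls):
    # inner walk: follow one randomly chosen member sequence of the class
    node = random.choice(start_node[cls])
    while node != "end of sequence":
        if word[node]:                          # skip the empty sequence
            print(word[node])
        node = nxt.get(node, "end of sequence")

for cls in ["A", "X", "B", "Y", "C"]:           # outer walk: the class sequence
    print_random_member(cls)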
We can now recall our sentence:
$ ./the_semantic_db_console.py
Welcome!

sa: load sentence-sequence.sw
sa: info off
sa: print-sentence |node 20: 1>
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|man>
|also>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|old>
|man>
|on>
|the>
|hill>
|used>
|a>
|telescope>
|end of sequence>
So there we have it: a structure we can easily copy and reuse for other sentences. The hard part is typing it all up, though I have an idea of how to help with that. The eventual goal would be for it to be fully automatic, but that will be difficult. For example, given this set of sentences:
"the man used a telescope"
"the woman used a telescope"
"the lady used a telescope"
"the old man also used a telescope"
"the other man on the hill used a telescope"
It feels plausible that this is enough information to learn the above classes and sequences, via some kind of sequence intersection, it seems to me. And if that were the case, it would show the power of grammatical structure: 5 sentences would be enough to generate 27 daughter sentences. For any real-world example, the number of daughter sentences would be huge.
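As a crude illustration of what sequence intersection might look like (purely a sketch, not how such learning would actually be implemented), we can diff two of the example sentences with Python's difflib and keep the blocks they share:
from difflib import SequenceMatcher

s1 = "the old man also used a telescope".split()
s2 = "the other man on the hill used a telescope".split()

matcher = SequenceMatcher(None, s1, s2)
shared = [" ".join(s1[m.a:m.a + m.size])
          for m in matcher.get_matching_blocks() if m.size]
print(shared)   # ['the', 'man', 'used a telescope']
The shared blocks line up with the fixed classes A, B and C, while the gaps between them mark where the variable classes X and Y would go.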

Next post: a more complicated sentence, with several levels of sequences and classes.


previous: learning and recalling chunked sequences
next: generating random grammatically correct sentences

updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org