generating random grammatically correct sentences

In the last post we looked at generating a short grammatically correct sentence, in a proposed brain-like way. The central idea was to represent our sentences using only classes and sequences. It's classes and sequences all the way down (not turtles!). In this post we extend this, and introduce a clean minimalist notation to represent these sequences and classes, and a "compiler" of sorts that converts this notation back to BKO. The implication, I guess, is that BKO could be considered a sort of assembly language for the brain.

Now on to this new notation (which is somewhat similar to BNF). We have these foundational objects:
{}                     -- the empty sequence
A                      -- a sequence of length one
A.B.C                  -- a sequence
{A, B, C}              -- a class
A = {B, C.D.E, F, G.H} -- definition of a class of sequences
I = A.B.C.D            -- definition of a sequence of classes
And that is pretty much it! Perhaps it would help to show how these map back to BKO:
-- the empty sequence:
pattern |node 0: 0> => random-column[10] encode |end of sequence>

-- a sequence of length one:
-- boy
pattern |node 5: 0> => random-column[10] encode |boy>
then |node 5: 0> => random-column[10] encode |end of sequence>

-- a sequence of length three:
-- the . woman . saw
pattern |node 1: 0> => random-column[10] encode |the>
then |node 1: 0> => random-column[10] encode |woman>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |saw>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => random-column[10] encode |end of sequence>

-- a sequence of classes:
-- L = A.K.B
pattern |node 20: 0> => random-column[10] encode |A>
then |node 20: 0> => random-column[10] encode |K>

pattern |node 20: 1> => then |node 20: 0>
then |node 20: 1> => random-column[10] encode |B>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |end of sequence>

-- a class of one sequence:
-- A = {the.woman.saw}
start-node |A: 0> => pattern |node 1: 0>

-- a class of three sequences:
-- E = {{}, old, other}
start-node |E: 0> => pattern |node 0: 0>
start-node |E: 1> => pattern |node 6: 0>
start-node |E: 2> => pattern |node 7: 0>
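The mapping from a word sequence to these learn rules is mechanical. Here is a sketch of it in Python; emit_sequence is a hypothetical helper of my own (the real translation is done by gm2sw.py), written simply to mirror the structure of the examples above:

```python
# Hypothetical sketch: turn one word sequence into the pattern/then
# learn rules shown above. The real compiler is gm2sw.py; this just
# mirrors the structure of the examples.
def emit_sequence(node, words):
    seq = words + ['end of sequence']
    rules = []
    for i in range(len(words)):
        if i == 0:
            # the first pattern encodes the first word directly
            rules.append(f'pattern |node {node}: 0> => random-column[10] encode |{seq[0]}>')
        else:
            # later patterns chain off the previous step's then rule
            rules.append(f'pattern |node {node}: {i}> => then |node {node}: {i - 1}>')
        # each then points at the next element, ending with end of sequence
        rules.append(f'then |node {node}: {i}> => random-column[10] encode |{seq[i + 1]}>')
    return rules

print('\n'.join(emit_sequence(1, ['the', 'woman', 'saw'])))
```

For the.woman.saw this reproduces the six learn rules of the node 1 example above.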
With that in place, we can consider our first sentence:
$ cat gm-examples/first-sentence.gm
A = {the}
B = {{}, old, other}
C = {man, woman}
D = {{}, young}
E = {child}
F = {youngest, eldest}
G = {child, sibling}
H = {{}, on.the.hill, also}
I = {used.a.telescope}

J = B.C
K = D.E
L = F.G

M = {J, K, L}

N = A.M.H.I
Then we compile this back to BKO using gm2sw.py:
$ ./gm2sw.py gm-examples/first-sentence.gm > sw-examples/first-sentence.sw
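gm2sw.py itself isn't listed in this post, but the parsing half of it is straightforward. Here is a minimal sketch, assuming only the notation defined above (parse_gm and its dict output format are my own invention, not the actual gm2sw.py internals):

```python
def parse_gm(text):
    # Parse gm notation into {name: list of sequences}, where a sequence
    # is a list of tokens and {} is the empty sequence.
    rules = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or '=' not in line:
            continue
        name, rhs = [s.strip() for s in line.split('=', 1)]
        if rhs.startswith('{'):
            # a class: comma separated members, each a dotted sequence
            members = [m.strip() for m in rhs[1:-1].split(',')]
            rules[name] = [[] if m == '{}' else m.split('.') for m in members]
        else:
            # a bare sequence of classes, eg N = A.M.H.I
            rules[name] = [rhs.split('.')]
    return rules

rules = parse_gm("""
A = {the}
E = {{}, old, other}
N = A.E
""")
# rules['E'] is [[], ['old'], ['other']]
```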
Load it up in the console:
$ ./the_semantic_db_console.py
Welcome!

-- switch off displaying "info" messages:
sa: info off

-- find available "sentences":
sa: rel-kets[sentence]
|J> + |K> + |L> + |N>

-- recall the "N" sentence:
sa: recall-sentence sentence |N>
|the>
|man>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|young>
|child>
|also>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>
Now for a slightly more interesting sentence:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {{}, man, woman}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
$ ./gm2sw.py gm-examples/the-woman-saw.gm > sw-examples/the-woman-saw.sw
$ ./the_semantic_db_console.py
sa: rel-kets[sentence]
|G> + |I> + |J> + |L> + |O> + |R>

sa: recall-sentence sentence |R>
|the>
|boy>
|saw>
|the>
|old>
|woman>
|through>
|the>
|telescope>
|end of sequence>

sa: .
|the>
|saw>
|the>
|old>
|woman>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|old>
|man>
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|the>
|woman>
|saw>
|the>
|young>
|boy>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|girl>
|saw>
|through>
|the>
|telescope>
|end of sequence>
While we have this knowledge loaded, we can also do things like randomly walk individual sub-elements of our full sentences:
sa: recall-sentence pattern pick-elt rel-kets[pattern]
|the>
|man>
|end of sequence>

sa: .
|other>
|end of sequence>

sa: .
|boy>
|end of sequence>

sa: .
|girl>
|end of sequence>

sa: .
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|binoculars>
|end of sequence>

sa: .
|end of sequence>

sa: .
|telescope>
|end of sequence>

sa: .
|saw>
|the>
|young>
|girl>
|through>
|the>
|telescope>
|end of sequence>
So at this point it might be a bit opaque how recall-sentence unpacks our stored sentences. Essentially it walks the given sentence, ie sequence, and if an element in that sequence is a class (ie, has a start-node defined), it recursively walks that sub-sequence, else it prints the element name. For example, recall this knowledge and consider the high level sequence R:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {{}, man, woman}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
So if we walk the R sequence, with no recursion, we have:
sa: follow-sequence sentence |R>
|M>
|N>
|K>
|P>
|Q>
|end of sequence>
But each of these elements is itself a class. Here are the sequences in the M, N and K classes:
sa: follow-sequence start-node |M: 0>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |M: 1>
|H>
|E>
|F>
|end of sequence>

sa: follow-sequence start-node |N: 0>
|saw>
|end of sequence>

sa: follow-sequence start-node |K: 0>
|end of sequence>

sa: follow-sequence start-node |K: 1>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |K: 2>
|H>
|E>
|F>
|end of sequence>
And if a class contains more than one member, the sub-sequence to recursively walk is chosen randomly. And so on, until you reach objects with no start-nodes, ie low level sequences. Heh. I don't know if that explanation helped. This is the full python that defines the recall-sentence operator:
# Usage:
# print-sentence |*> #=> recall-sentence pattern |_self>
# print-sentence |node 200: 1>
#
# one is a sp
def recall_sentence(one, context):
    if len(one) == 0:
        return one
    current_node = one

    def next(one):
        return one.similar_input(context, "pattern").select_range(1, 1).apply_sigmoid(clean).apply_op(context, "then")

    def name(one):
        return one.apply_fn(extract_category).similar_input(context, "encode").select_range(1, 1).apply_sigmoid(clean)

    def has_start_node(one):                   # check if one is a class
        two = ket(one.the_label() + ": ")
        return len(two.apply_fn(starts_with, context).select_range(1, 1).apply_op(context, "start-node")) > 0

    def get_start_node(one):
        two = ket(one.the_label() + ": ")
        return two.apply_fn(starts_with, context).pick_elt().apply_op(context, "start-node")

    while name(current_node).the_label() != "end of sequence":
        if not has_start_node(name(current_node)):
            print(name(current_node))
        else:
            start_node = get_start_node(name(current_node))
            recall_sentence(start_node, context)
        current_node = next(current_node)
    return ket("end of sequence")
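If the BKO machinery obscures the idea, the same recursive walk can be sketched in ordinary Python. This toy version (my own, with a made-up grammar fragment) stores each class as a list of alternative sequences, treats undefined names as terminal words, and picks among alternatives at random:

```python
import random

# Toy analogue of recall-sentence, independent of the BKO engine.
# A class maps a name to a list of alternative sequences; a name with
# no entry is a terminal word. This grammar fragment is made up.
grammar = {
    'A': [['the']],
    'E': [[], ['old'], ['other']],
    'F': [['man'], ['woman']],
    'J': [['A', 'E', 'F']],
}

def recall(name, grammar, out=None):
    if out is None:
        out = []
    if name not in grammar:                       # terminal: emit the word itself
        out.append(name)
        return out
    for element in random.choice(grammar[name]):  # pick one sequence from the class
        recall(element, grammar, out)             # and walk it recursively
    return out

print(' '.join(recall('J', grammar)))  # eg "the old woman"
```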
Now, just for fun we can visualize our sentence structure, which is essentially a complex network, using our sw2dot code.
$ ./the_semantic_db_console.py
sa: save the-woman-saw--saved.sw
sa: q

$ grep -v "^full" sw-examples/the-woman-saw--saved.sw | grep -v "^support" > sw-examples/the-woman-saw--tidy.sw

$ ./sw2dot-v2.py sw-examples/the-woman-saw--tidy.sw
Open that in graphviz, using neato, to see the resulting sentence network.
Now some notes:
1) Because of the recursive nature of the recall-sentence operator it should, barring a bug, handle multiple levels of sequences and classes, in contrast with the simpler example in the last post, which was restricted to one level of classes and sequences. This potentially allows for very complex structures, and certainly longer texts than single sentences.
2) Even with our short-cut notation, defining sentences is still somewhat hard work. The eventual goal is for them to be learnt automatically. A hard task, but having a sentence representation is at least a useful step in that direction.
3) So far our classes and sequences have been small. I suspect classes will always remain small, as grammar has strict rules that seem to require small classes. Sequences, on the other hand, I don't know. Presumably structures larger than single sentences would need longer sequences, but the fact that the brain uses chunking hints that those sequences can't be too long. So instead of a large structure using long sequences, it would use more levels of shorter sequences, which is essentially what chunking does. Indeed, here is our chunked sequences example in our new gm notation:
$ cat gm-examples/alphabet-pi.gm
a1 = {A.B.C}
a2 = {D.E.F}
a3 = {G.H.I}
a4 = {J.K.L}
a5 = {M.N.O}
a6 = {P.Q.R}
a7 = {S.T.U}
a8 = {V.W.X}
a9 = {Y.Z}

alphabet = a1.a2.a3.a4.a5.a6.a7.a8.a9

p1 = {3.1.4}
p2 = {1.5}
p3 = {9.2}
p4 = {6.5}
p5 = {3.5}
p6 = {8.9}

pi = p1.p2.p3.p4.p5.p6
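As a quick sanity check of the chunking idea, flattening the pi chunks recovers the original digit sequence:

```python
# The pi chunks from the gm file above, flattened back into one sequence.
chunks = {
    'p1': ['3', '1', '4'], 'p2': ['1', '5'], 'p3': ['9', '2'],
    'p4': ['6', '5'], 'p5': ['3', '5'], 'p6': ['8', '9'],
}
pi = ['p1', 'p2', 'p3', 'p4', 'p5', 'p6']
digits = [d for chunk in pi for d in chunks[chunk]]
print(''.join(digits))  # 3141592653589
```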
4) What other objects can we represent, other than grammatical sentences, using just classes and sequences? Something I have been thinking about for a long time now is: how would you represent the knowledge stored in a mathematician's head? My project claims to be about knowledge representation, so why not mathematics? I don't know, but I suspect we won't have an artificial mathematician until well after we have a full AGI.
5) The other side of that is: what can't we represent using just classes and sequences? I don't know yet, but certainly long-range structure might be part of it. A random choice at the start of a sentence sometimes affects what is valid later in that sentence, and I don't think we can represent that. And that leads to my last point: fixed classes and random choice are just the first step. In a brain, the set of available classes to compose your sentences from is dynamic, always changing, and if you want to say anything meaningful, your choices of how to unpack a sentence are the opposite of random.
6) Approximately how many neurons in our "the-woman-saw.gm" example? Well, we have:
sa: how-many rel-kets[pattern]
|number: 44>

sa: how-many starts-with |node >
|number: 44>

sa: how-many rel-kets[start-node]
|number: 23>
So roughly 67 neurons. Though that doesn't count the neurons needed to recall the sentences, ie the ones corresponding to our python recall-sentence operator.

Home
previous: learning and recalling a simple sentence
next: predicting sequences

updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org