train of thought

Now that I have explained supported-ops, apply(), and create inverse, I can show you "train of thought".
Way back when I first started this project, it looked like this:
-- simple code to approximately represent the concept of "train of thought"
train-of-thought[|X>,n] ::= [
|ket> => |X>
repeat[n][
  |op> => pick-elt:supported-ops|ket>
  |ket> => pick-elt:apply(|op>,|ket>)
  Output:|ket>
]
return |ket>
]
(yeah, for a while operators were separated by colons, now by spaces) 

Now, in Python it looks like:
# train-of-thought[n] some-superposition
# eg: train-of-thought[20] |colour: red>
#
# where n is an int.
def console_train_of_thought(one,context,n):
  # note: the ket and superposition classes are defined elsewhere in the project
  try:
    n = int(n)
  except (ValueError, TypeError):
    return ket("",0)

  print("context:",context.name)
  print("one:",one)
  print("n:",n)
  X = one.pick_elt()                        # pick a random starting ket from the superposition
  print("|X>:",X)
  print()
  result = superposition()

  for k in range(n):
    op = X.apply_op(context,"supported-ops").pick_elt()  #   |op> => pick-elt supported-ops |X>
    X = X.apply_op(context,op).pick_elt()                #   |X> => pick-elt apply(|op>,|X>)
    result.data.append(X)
    print(X.display())
  return result                             # return a record of the train-of-thought
 
I guess, in words: start with a ket.
Look up its list of supported-ops.
Pick one at random (that's what pick-elt does).
Apply that operator to your starting ket.
Pick one element at random from the resulting superposition.
Loop.
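
If you want to play with the idea outside the project, here is a minimal, self-contained Python sketch. It is only an approximation: the knowledge store is a plain dict mapping ket -> op -> list of kets, a hypothetical stand-in for the project's context object, and the toy data is made up.

import random

# toy knowledge store: ket -> op -> list of result kets
# (a made-up stand-in for the project's context object)
network = {
  "Washington": {"president-number": ["number: 1"], "party": ["party: unaffiliated"]},
  "number: 1": {},
  "party: unaffiliated": {},
}

def train_of_thought(network, start, n):
  # random walk: repeatedly pick a supported op, apply it, then pick one result
  x = start
  trail = []
  for _ in range(n):
    ops = list(network.get(x, {}))      # supported-ops |x>
    if not ops:                         # no supported ops means we have hit |>
      break
    op = random.choice(ops)             # |op> => pick-elt supported-ops |x>
    x = random.choice(network[x][op])   # |x> => pick-elt apply(|op>, |x>)
    trail.append(x)
  return trail

print(train_of_thought(network, "Washington", 20))

Run it and the walk stalls after a step or two, since the toy kets it lands on have no supported ops. Which brings us to the next point.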

Note that for this to work well, you first need lots of data, and you need to have run create inverse.
The reason inverses matter is that without them you rapidly run into |>, and your train stops!
With inverses, it happily goes round in circles.
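
Continuing the toy sketch above, a rough stand-in for what create inverse does (the names here are mine, not the project's) is to add a reverse edge for every learned rule, so no ket is a dead end:

def create_inverse(network):
  # for every subject -op-> object, learn object -inverse-op-> subject
  new_edges = []
  for subject, rules in network.items():
    for op, objects in rules.items():
      for obj in objects:
        new_edges.append((obj, "inverse-" + op, subject))
  for obj, inv_op, subject in new_edges:
    network.setdefault(obj, {}).setdefault(inv_op, []).append(subject)

create_inverse(network)
print(train_of_thought(network, "Washington", 20))

Now the walk can step from "party: unaffiliated" back to "Washington" and keep going round indefinitely.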

Let's give an example, using the early US presidents data:
sa: load early-us-presidents.sw
sa: train-of-thought[20] |Washington>
context: sw console
one: |Washington>
n: 20
|X>: |Washington>
|year: 1793>
0.000|>
0.000|>
0.000|>
...
Doh! We forgot to run "create inverse" and our train of thought died.

Let's try again:
sa: create inverse
sa: train-of-thought[20] |Washington>
context: sw console
one: |Washington>
n: 20
|X>: |Washington>

|early US Presidents: _list>
|Adams>
|party: Federalist>
|Adams>
|number: 2>
|Adams>
|person: John Adams>
|US President: John Adams>
|person: John Adams>
|Adams>
|person: John Adams>
|Adams>
|early US Presidents: _list>
|Q Adams>
|early US Presidents: _list>
|Adams>
|party: Federalist>
|Adams>
|early US Presidents: _list>
|Jefferson>
4.000|early US Presidents: _list> + 7.000|Adams> + 2.000|party: Federalist> + |number: 2> + 3.000|person: John Adams> + |US President: John Adams> + |Q Adams> + |Jefferson>
So it works, but it's not super great. We really need a very large data-set for best results.

That's it for now!



updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org