# defining some terms

So, the notation I use is motivated by the bra-ket notation used in quantum mechanics, invented by the famous physicist Paul Dirac. Note, though, that the mathematics of my scheme and that of QM are vastly different.

Let's define the terms:
<x| is called a bra
|x> is called a ket
Essentially any text (currently ASCII only) can appear inside a bra or ket, except for <, |, >, \r and \n. We do have some conventions, though; more on that later.

Next, we have operators, which are again just text.
The Python function defining valid operators is:
```python
from string import ascii_letters

def valid_op(op):
    # the first character must be alphabetic, or the '!' symbol
    # (so operators such as is-alive, with a hyphen, are accepted)
    if not op or (not op[0].isalpha() and not op[0] == '!'):
        return False
    # the remaining characters may also be digits or the symbols - + ! ?
    return all(c in ascii_letters + '0123456789-+!?' for c in op)
```
Next, we have what I call "superpositions" (again borrowed from QM). A superposition is just the sum of one or more kets.
eg:
|a> + |b> + |c>
But a full superposition can also have coeffs (I almost always write "coefficients" as "coeffs"), eg:
3|a> + 7.25|b> + 21|e> + 9|d>
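As a minimal sketch (my own illustration, not the project's implementation), a superposition can be modelled in Python as a dict mapping ket labels to coeffs, with addition merging the coefficients of matching kets:

```python
# Sketch only: represent a superposition as {ket_label: coeff}.
def add_sp(sp1, sp2):
    """Add two superpositions, merging coefficients of matching kets."""
    result = dict(sp1)
    for label, coeff in sp2.items():
        result[label] = result.get(label, 0) + coeff
    return result

# 3|a> + 7.25|b>  plus  21|e> + 9|d>
sp = add_sp({'a': 3, 'b': 7.25}, {'e': 21, 'd': 9})
print(sp)  # {'a': 3, 'b': 7.25, 'e': 21, 'd': 9}
```

Note that a plain dict already gives us the "labels, not positions" property: kets that are absent simply have coeff 0.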

The name "superposition" is partly motivated by Schrödinger's poor cat:
is-alive |cat> => 0.5 |yes> + 0.5 |no>

This, BTW, is what we call a "learn rule" (though there are a couple of other variants).
They have the general form:
OP KET => SUPERPOSITION
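A toy sketch (mine, not the project's code) of storing and recalling learn rules of this form, keyed on the (op, ket) pair:

```python
# Sketch only: a learn rule "OP KET => SUPERPOSITION" stored in a dict.
rules = {}

def learn(op, ket, sp):
    """Store the rule: op applied to |ket> gives the superposition sp."""
    rules[(op, ket)] = sp

def recall(op, ket):
    """Look up a rule; an unlearned rule returns the empty superposition |>."""
    return rules.get((op, ket), {})

# is-alive |cat> => 0.5 |yes> + 0.5 |no>
learn('is-alive', 'cat', {'yes': 0.5, 'no': 0.5})
print(recall('is-alive', 'cat'))  # {'yes': 0.5, 'no': 0.5}
```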

Next, we have some math rules for all this, though for now it will suffice to mention only these:
```
1) <x||y> == 0 if x != y.
2) <x||y> == 1 if x == y.
7) applying bras is linear: <x|(|a> + |b> + |c>) == <x||a> + <x||b> + <x||c>
8) if a coeff is not given, then it is 1. eg, <x| == <x|1 and 1|x> == |x>
9) bras and kets commute with coefficients. eg, <x|7 == 7 <x| and 13|x> == |x>13
13) kets in superpositions commute. |a> + |b> == |b> + |a>
18) |> is the identity element for superpositions. sp + |> == |> + sp == sp.
|a> + |a> + |a> == 3|a>
|a> + |b> + |c> + 6|b> == |a> + 7|b> + |c>
```
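Rules 1, 2 and 7 together can be sketched very compactly (again, my own illustration): applying a bra to a superposition stored as a labels-to-coeffs dict is just a coefficient lookup, and that lookup is automatically linear over the kets.

```python
# Sketch only: <x| applied to a superposition {ket_label: coeff}.
def apply_bra(label, sp):
    # <x||y> == 1 if x == y else 0, extended linearly over the superposition,
    # so the result is simply the coefficient of |label> (0 if absent).
    return sp.get(label, 0)

sp = {'a': 1, 'b': 7, 'c': 1}   # |a> + 7|b> + |c>
print(apply_bra('b', sp))       # 7
print(apply_bra('z', sp))       # 0, by rule 1
```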

And that is it for now. Heaps more to come!

Update: I guess you could call superpositions labelled, sparse vectors. Giving each element in a vector a name gives us the freedom to drop elements with coeff = 0. For large sparse vectors, this is a big win. For large dense vectors, there is of course a price to pay for all those labels. And since we have labels, we can change the order of the elements without changing the meaning of the superposition, say if we want to sort by coeff size. This is harder if you use standard unlabelled vectors. The other thing to note about superpositions is that they allow you to define operators with respect to vector labels, rather than vector positions.
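The labelled-sparse-vector view can be illustrated with a short sketch (assumptions mine): dropping zero coeffs and reordering by coeff size both leave the superposition's meaning unchanged.

```python
# Sketch only: |a> + 7|b> + 0|c> + 21|d> as a labels-to-coeffs dict.
sp = {'a': 1, 'b': 7, 'c': 0, 'd': 21}

# drop elements with coeff == 0 (allowed because elements are named)
sp = {k: v for k, v in sp.items() if v != 0}

# reorder by coefficient size, largest first; the meaning is unchanged
sp = dict(sorted(sp.items(), key=lambda kv: -kv[1]))
print(sp)  # {'d': 21, 'b': 7, 'a': 1}
```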


updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org