introducing active buffer
Here is yet another use of our simm. The idea is that there seem to be various layers in the brain, each with an active buffer of recent kets, and once the buffer is full, it is processed. When reading, that buffer fills up with individual letters, and then when a non-letter char appears, the brain processes the buffer to decide what word it has just seen. The next level up is sentence fragments: the buffer fills with words until it sees punctuation (eg, a comma or dot). And so on. Another example is conversation with a person. You build up a buffer of their words and sentences, and once they have made their point, you process that buffer.
Here is the usage:
-- uses "" as the default pattern.
active-buffer[N,t] some-superposition
-- uses your chosen pattern (we can't use "" as the pattern here, due to a broken parser!)
active-buffer[N,t,pattern] some-superposition
where:
N is an int -- the size of the active buffer
t is a float -- the drop-below threshold
pattern is a string -- the pattern we are using
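For a rough feel of the syntax (these are hypothetical examples; worked examples are in the next post):

-- buffer up to 7 kets at a time, drop matches with coefficient below 0.5:
active-buffer[7,0.5] some-superposition

-- the same, but matching against a chosen pattern (here a hypothetical "word" pattern):
active-buffer[7,0.5,word] some-superposition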
Here is the Python:
def console_active_buffer(one,context,parameters):  # one is the passed in superposition
  try:
    # three parameters: buffer size, drop-below threshold, and pattern.
    N,t,pattern = parameters.split(',')
    N = int(N)
    t = float(t)
  except:
    try:
      # two parameters: fall back to "" as the default pattern.
      N,t = parameters.split(',')
      N = int(N)
      t = float(t)
      pattern = ""
    except:
      return ket("",0)

  one = superposition() + one  # make sure one is a superposition, not a ket.
  result = superposition()
  data = one.data
  for k in range(len(data)):
    for n in range(N):          # consider buffers of length 1 up to N ...
      if k < len(data) - n:     # ... that don't run off the end of the data.
        y = superposition()
        y.data = data[k:k+n+1]  # this is the bit you could call the buffer.
        result += context.pattern_recognition(y,pattern).drop_below(t)
  return result
Yeah, the code is a bit ugly! I'm not currently sure of the best way to tidy it; probably that will come when I have working sequences.
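To see what the double loop is doing, here is the windowing step on its own, as a minimal self-contained sketch using plain Python lists rather than superpositions (the function name is mine, not part of the project):

def windows(data, N):
  # yield every contiguous slice of data with length 1 up to N.
  for k in range(len(data)):
    for n in range(N):
      if k < len(data) - n:
        yield data[k:k+n+1]

print(list(windows(['f','r','o','g'], 3)))
# [['f'], ['f', 'r'], ['f', 'r', 'o'], ['r'], ['r', 'o'], ['r', 'o', 'g'], ['o'], ['o', 'g'], ['g']]

Each of those windows is then fed to pattern_recognition, and weak matches are dropped by drop_below(t).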
Some notes:
1) This is implemented using superpositions, but really it would work better with sequences. Unfortunately, I haven't fully worked out how sequences will work, let alone written any code.
2) The code above uses fixed-size buffers (N). It seems likely that in the brain there are end-buffer kets that signal the end of the buffer and trigger its processing, rather than a fixed size. eg, for letters, non-word chars trigger processing of a word (see the sketch below).
3) The general supervised pattern recognition algo somewhat decreases the use case for this code.
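Here is a minimal sketch of the idea in note 2, again in plain Python rather than superpositions (the function and parameter names are hypothetical, purely for illustration): an end-buffer token, rather than a fixed size N, triggers processing.

def delimiter_buffered(tokens, is_end_token, process):
  # accumulate tokens until an end token arrives, then process the buffer.
  buffer = []
  for tok in tokens:
    if is_end_token(tok):
      if buffer:
        process(buffer)
      buffer = []
    else:
      buffer.append(tok)
  if buffer:  # flush anything left over at the end
    process(buffer)

# eg, letters accumulate until a non-letter char triggers processing of a word:
delimiter_buffered("the frog jumped.", lambda c: not c.isalpha(),
                   lambda buf: print("word:", "".join(buf)))
# word: the
# word: frog
# word: jumped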
Examples in the next post!
updated: 19/12/2016
by Garry Morrison
email: garry -at- semantic-db.org