I am using an HTMPrediction model on a sequence of inputs, predicting the next 
steps in the sequence. I am trying to establish a correlation between the 
activation of my columns and the signal I am modelling (or rather a high-level 
version of it). My use case is rather simple for now: I have 4 inputs that can 
each take three different values, and I try to predict the next value of one 
(or more) of those inputs using an SDRClassifierRegion.
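
For concreteness, here is roughly how I create and run the model (a sketch: 
`MODEL_PARAMS` stands for the parameter dict I borrowed from the example, and 
`input1` etc. are placeholder field names):

    from nupic.frameworks.opf.model_factory import ModelFactory

    # MODEL_PARAMS is the HTMPrediction parameter dict from the example
    model = ModelFactory.create(MODEL_PARAMS)
    model.enableInference({'predictedField': 'input1'})

    # each record holds the current value of the four inputs
    result = model.run({'input1': 'a', 'input2': 'b', 'input3': 'a', 'input4': 'c'})
    predicted = result.inferences['multiStepBestPredictions'][1]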

I have a simple sequence of inputs that is cyclic in nature, of length roughly 
300 (10 cycles of about 30 values each). The accuracy on this data is pretty 
good, around 90%, when I use the default parameter values from another 
example, I think hot gym or CPU (2048 columns, 32 cells per column); I can 
post the rest if that is required. I changed the alpha of the SDR classifier 
to 0.01.
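
The accuracy figure comes from a simple one-step comparison loop, along these 
lines (a sketch; `sequence` and the field name are placeholders):

    correct, total = 0, 0
    prev_prediction = None
    for record in sequence:  # the ~300-value cyclic sequence
        if prev_prediction is not None:
            total += 1
            correct += int(prev_prediction == record['input1'])
        result = model.run(record)
        prev_prediction = result.inferences['multiStepBestPredictions'][1]
    print(float(correct) / total)  # ~0.90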

The thing is, when I look at the activated columns of my temporal memory 
(using the activationThreshold defined in the parameters, which is 16), I find 
that there are almost no activated columns (I get 1-4 activated columns during 
my first epoch, then 0). There are around 40 activated *cells* out of 
2048 * 32, which is also very low. I am looking at them with the following 
statements:

    import numpy as np

    tp = model._getTPRegion().getSelf()  # the TPRegion instance
    # cell activity at the current time step, shape (numberOfCols, cellsPerColumn)
    active_cells = tp._tfdr.infActiveState['t']
    # columns where at least activationThreshold (16) cells are active
    active_columns = np.where(np.sum(active_cells, 1) >= tp.activationThreshold)[0]
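
For comparison, I also read the spatial pooler's output directly (a sketch; 
'bottomUpOut' is the SP region's output, and with the default 
numActiveColumnsPerInhArea of 40 I would expect roughly 40 active columns per 
step):

    # columns made active by the spatial pooler at the current time step
    sp_output = model._getSPRegion().getOutputData('bottomUpOut')
    sp_active_columns = sp_output.nonzero()[0]
    print(len(sp_active_columns))  # ~40 of 2048 with the default parameters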

Does anything seem wrong with my approach? 

Are the activations getting weird because my problem has very low 
dimensionality? I am using `'w': 21` for each of my four encoded variables. 
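
Each variable is encoded along these lines (a sketch with a hypothetical 
category list; the real encoders use `'w': 21` as above):

    from nupic.encoders.category import CategoryEncoder

    # hypothetical 3-value category list for one of the four inputs
    enc = CategoryEncoder(w=21, categoryList=['low', 'mid', 'high'], forced=True)
    sdr = enc.encode('low')
    print(enc.getWidth(), sdr.sum())  # 84 total bits, 21 of them on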

Thanks.
