Thank you Joel. I gave each node in the cluster 24g of heap and it ran,
but then failed on the 50th iteration (was trying to do 1,000).
This time, I have the error on the node and the exception from the
client running the stream command. The node (Doris) has 3 errors that
occurred at the same ...

Hi Joe,
Currently you will eventually run into memory problems if the training set
gets too large. Under the covers, each node is creating a matrix with
a row for each document and a column for each feature. This can get large
quite quickly. By choosing fewer features you can make this matrix ...
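Joel's point about the matrix growing quickly can be made concrete with a back-of-the-envelope estimate. A quick sketch (the 8-bytes-per-cell dense-double assumption and the 2,000-feature count are illustrative guesses, not numbers from this thread):

```python
# Rough heap estimate for a dense training matrix:
# one row per document, one column per feature, each cell a double.

def matrix_heap_estimate_gib(num_docs, num_features, bytes_per_cell=8):
    """Approximate heap needed for the dense matrix, in GiB."""
    return num_docs * num_features * bytes_per_cell / (1024 ** 3)

# 1.2M documents (from the thread) with a hypothetical 2,000-feature
# vocabulary already approaches ~18 GiB before any JVM overhead,
# which would not fit in a 16g heap.
print(f"~{matrix_heap_estimate_gib(1_200_000, 2_000):.1f} GiB")
```

This is why reducing the feature count (the columns) is the first lever to pull: memory scales linearly in both documents and features.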
I tried to build a large model based on about 1.2 million documents.
One of the nodes ran out of memory and killed itself. Is this much data
not reasonable to use? The nodes have 16g of heap. Happy to increase
it, but not sure if this is possible?
Thank you!
-Joe
On 4/5/2018 10:24 AM, Jo...
Thank you Shawn - sorry it took so long to respond; I've been playing around
with this a good bit. It is an amazing capability. It looks like it could
be related to certain nodes in the cluster not responding quickly
enough. In one case, I got the concurrent.ExecutionException, but it
looks like the root ...
On 4/2/2018 1:55 PM, Joe Obernberger wrote:
> The training data was split across 20 shards - specifically created with:
> http://icarus.querymasters.com:9100/solr/admin/collections?action=CREATE&name=MODEL1024_1522696624083&numShards=20&replicationFactor=2&maxShardsPerNode=5&collection.configName=T...
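For reference, the CREATE parameters quoted above imply a minimum cluster size, which is easy to sanity-check. A sketch using only the numbers visible in the URL (note: in this API, maxShardsPerNode limits replicas/cores per node, not just leader shards):

```python
import math

# Parameters from the CREATE call quoted above.
num_shards = 20
replication_factor = 2
max_shards_per_node = 5  # caps replicas (cores) placed on any one node

total_replicas = num_shards * replication_factor             # total cores
min_nodes = math.ceil(total_replicas / max_shards_per_node)  # nodes needed

print(total_replicas, min_nodes)  # → 40 8
```

So this collection needs at least 8 live nodes just to place all 40 cores, and losing one node takes several shard replicas with it.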
Hi Joel - thank you for your reply. Yes, the machine (Vesta) is up, and
I can access it. I don't see anything specific in the log, apart from
the same error, but this time to a different server. We have constant
indexing happening on this cluster, so if one went down, the indexing
would stop.
It looks like it is accessing a replica that's down. Are the logs from
http://vesta:9100/solr/MODEL1024_1522696624083_shard20_replica_n75 reporting
any issues? When you go to that URL, is it back up and running?
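One way to check replica state across the whole collection at once is the Collections API's CLUSTERSTATUS action. A minimal parsing sketch; the JSON below is a hand-built sample shaped like a real response (trimmed to the fields used here), not output from this cluster:

```python
import json

# Hand-built sample shaped like a CLUSTERSTATUS response, e.g. from:
#   /solr/admin/collections?action=CLUSTERSTATUS&collection=MODEL1024_1522696624083
sample = json.loads("""
{
  "cluster": {
    "collections": {
      "MODEL1024_1522696624083": {
        "shards": {
          "shard20": {
            "replicas": {
              "core_node75": {"state": "down",   "node_name": "vesta:9100_solr"},
              "core_node76": {"state": "active", "node_name": "doris:9100_solr"}
            }
          }
        }
      }
    }
  }
}
""")

def down_replicas(status, collection):
    """Yield (shard, replica, node) for every replica not in state 'active'."""
    shards = status["cluster"]["collections"][collection]["shards"]
    for shard_name, shard in shards.items():
        for replica_name, replica in shard["replicas"].items():
            if replica["state"] != "active":
                yield shard_name, replica_name, replica["node_name"]

for item in down_replicas(sample, "MODEL1024_1522696624083"):
    print(item)  # → ('shard20', 'core_node75', 'vesta:9100_solr')
```

Scanning the full status this way beats checking replica URLs one by one, since a replica can flap back up between manual checks.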
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Apr 2, 2018 at 3:55 PM, Joe Obernberger wrote: ...