how to recover from OpenSearcher called on closed core

2017-09-28 Thread rubi.hali
Hi

We are using Solr 6.1 and have a master/slave setup: one master and two slaves.

We have enabled replication polling on the slaves at an interval of 300s, which
results in the error: *Index Fetch Failed : openNewSearcher called on closed core*
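
For context, the slave-side polling mentioned above lives in the ReplicationHandler section of solrconfig.xml, roughly like this (a sketch only; the master URL and core name are placeholders, not our actual values):

```xml
<!-- hypothetical slave replication config; pollInterval is HH:MM:SS -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core1/replication</str>
    <str name="pollInterval">00:05:00</str> <!-- 300 seconds -->
  </lst>
</requestHandler>
```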

Our commit strategy involves both hard commits and soft commits. Our
hard-commit conditions are:
maxDocs 25000
maxTime 6
openSearcher is false on the master, but on the slaves it is true.

Our soft-commit time is 30.
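
In solrconfig.xml terms, the commit settings above would look roughly like this (a sketch; the numbers are reproduced exactly as stated above, with openSearcher shown for the master):

```xml
<!-- sketch of the commit strategy described above -->
<autoCommit>
  <maxDocs>25000</maxDocs>
  <maxTime>6</maxTime>
  <openSearcher>false</openSearcher> <!-- true on the slaves -->
</autoCommit>
<autoSoftCommit>
  <maxTime>30</maxTime>
</autoSoftCommit>
```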

Please let me know if this error is due to any of the configurations we have
done.

Our analysis from the logs is that CachingDirectoryFactory is closing the core,
and when replication happens it starts throwing the error.

Thanks in advance

--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


how to recover from OpenSearcher called on closed core

2017-09-28 Thread rubi.hali
Hi

We are using Solr 6.1.0. We have a master/slave setup in which the slaves poll
for replication every 300 seconds.

But after every replication poll, we are getting an error: Index Fetch
Failed: openNewSearcher called on closed core.

We have enabled soft commit after 30 ms and hard commit at 25000 docs
and 6 secs.
On the slaves we have set openSearcher to true for hard commits.

We are really not sure whether this issue has anything to do with our commit
strategy.

Please let me know if there is any possible explanation for why this is
happening.

From log analysis, I observed that CachingDirectoryFactory is closing the
core, and after that the ReplicationHandler starts throwing this exception.

Will this exception have any impact on memory consumption on the slaves?





Re: how to recover from OpenSearcher called on closed core

2017-09-28 Thread rubi.hali
Hi Nawaz

No, we are not doing any upgrade.

We hardly have 3 documents, so we don't feel the need for a SolrCloud
configuration.

Regarding the exception: from our analysis, before this error appears we
always see CachingDirectoryFactory closing the core.

We also tried Solr 6.2, and the same exception did not occur there.

Do you have any idea whether this issue, or any such replication issue, exists
in 6.1 and was resolved in 6.2?

Thanks in advance





Re: how to recover from OpenSearcher called on closed core

2017-09-29 Thread rubi.hali
Hi Eric

No, it's not NFS; it is ext4.

Our Solr is repeatedly running into OOM errors. I am really not sure whether
that is because of this exception, but what I observed is that whenever we
increase the polling interval, heap memory grows at a slower rate than with a
shorter polling interval.

We were not able to reproduce the same error on Solr 6.2.0. What I noticed
from the logs is that every time IndexFetcher gets called,
CachingDirectoryFactory opens the core in 6.2, but the same does not happen
in 6.1.0.

Let me know if there is something else to this issue, as our chances of
upgrading are very low until we are sure it is not a configuration issue.


Re: No inserts in Query Result Cache

2018-01-24 Thread rubi.hali
Did some further digging and found that, because grouping is enabled, the
query result cache is not getting any inserts. Only disabling grouping adds an
entry to the query result cache. Is there a way we can cache grouped results?
As per the wiki there is a group.cache.percent parameter, but then again it
doesn't work with term and default queries. Can someone please suggest a
possible approach in this case.
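
If it helps anyone reproduce this, the grouping cache is requested per query via group.cache.percent; a sketch of enabling it through handler defaults follows (the handler name and group field are illustrative, not our real ones):

```xml
<!-- hypothetical handler defaults turning on the second-pass grouping cache -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="group">true</str>
    <str name="group.field">category</str>
    <!-- cache is consulted when the result set is at most this percent of maxDoc -->
    <str name="group.cache.percent">100</str>
  </lst>
</requestHandler>
```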





High CPU and Physical Memory Usage in solr with 4000 user load

2018-02-08 Thread rubi.hali
Hi

We are using Solr 6.6.2 with one master and 4 slaves, each having 4 CPU cores
and 16 GB RAM. We were doing load testing with 4000 users and some 800 search
keywords, which resulted in 95% CPU usage in less than 3 minutes and affected
our query responses. There was also a spike in physical memory which did not
go down even when we stopped sending load.

The JVM heap given to Solr is 8 GB, which still leaves 8 GB for the OS.

We have 4 lakh (400,000) documents in Solr.

Our cache configurations in Solr are:

  [cache configuration XML not preserved in the archive]
We have enabled autocommit:

  <autoCommit>
    <maxDocs>${solr.autoCommit.maxDocs:25000}</maxDocs>
    <maxTime>${solr.autoCommit.maxTime:6}</maxTime>
    <openSearcher>true</openSearcher> <!-- true only in the case of slaves -->
  </autoCommit>

We are also doing soft commit:

  <autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:30}</maxTime>
  </autoSoftCommit>

Our queries use grouping, so the query result cache does not get used. But
even so, under heavy load we see this behaviour, which results in high
response times.

Please suggest any configuration mismatch or OS issue we should resolve to
bring down our high response times.


Re: High CPU and Physical Memory Usage in solr with 4000 user load

2018-02-09 Thread rubi.hali
Hi Shawn

We tried one more round of testing after we increased the CPU cores on each
server from 4 to 16, which amounts to 64 cores in total across the 4 slaves.
But CPU usage was still high, so we took thread dumps on one of our slaves and
found that threads were blocked. I have attached them.

As far as document size is concerned, we have only a 1.5 GB index (a single
one) amounting to 4 lakh (400,000) docs on each server, and the query load in
a span of 10 minutes was distributed across the slaves as:
slave 1 - 1258
slave 2 - 512
slave 3 - 256
slave 4 - 1384

We are using Linux.

As threads are blocked, our CPU is reaching 90% even when heap and OS memory
are barely being used.

One snippet of a BLOCKED thread is below. Most of the threads are blocked
either on the Jetty connector or on FieldCacheImpl for docValues (full dumps
attached as Slave1ThreadDump4.txt and Slave1ThreadDump5.txt):

"qtp1205044462-26-acceptor-1@706f9d47-ServerConnector@7ea87d3a{HTTP/1.1,[http/1.1]}{0.0.0.0:8983}"
#26 prio=5 os_prio=0 tid=0x7f2530501800 nid=0x4f9c waiting for monitor entry [0x7f2505eb8000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:233)
        - waiting to lock <0x00064067fc68> (a java.lang.Object)
        at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:373)
        at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:601)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - None



Re: High CPU and Physical Memory Usage in solr with 4000 user load

2018-02-13 Thread rubi.hali
Hi Shawn

As asked, I have attached the GC log (solr_gc.current) and a snapshot of top
(TopCommandSlave1.jpg).

Regarding the blocked threads: we are fetching facets and doing grouping with
the main query, and docValues were not enabled for those fields. So we enabled
docValues for them, and after that we saw no such blocked threads.

Regarding the custom handler, it is just a wrapper that changes the qf
parameter based on some conditions, so that should not be a problem.

CPU usage has also come down to 70%, but there is still a concern about a
continuous spike in physical memory even though our heap is not getting
utilized that much.

Also, there are blocked threads on QueuedThreadPool. Is this an issue, or is
it expected, since they should get released immediately?
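
For reference, enabling docValues on the facet and group fields was a schema.xml change along these lines (the field names here are made up for illustration; a full reindex is needed after the change):

```xml
<!-- hypothetical fields; docValues avoids building FieldCache at query time -->
<field name="brand" type="string" indexed="true" stored="true" docValues="true"/>
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
```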



Caching Solr Grouping Results

2018-05-18 Thread rubi.hali
Hi All

Can somebody please explain whether we can cache Solr grouping results in the
query result cache? I don't see any inserts in the query result cache once we
enable grouping.





Re: Caching Solr Grouping Results

2018-05-20 Thread rubi.hali
Hi Yasufumi

Thanks for the reply. Yes, you are correct. I also checked the code and it
appears to do the same.

We are facing performance issues due to grouping, so I wanted to be sure that
we are not leaving out any possibility of caching those results in the query
result cache.

I was also exploring field collapsing instead of grouping, but it doesn't
fulfill our requirement of having all values in a facet for a group rather
than only the collapsed value. So we don't have any choice but to go with
grouping, and maybe write some custom cache provider :(
