All solr cores in a solr server are down. Cannot find anything in the log.

2016-09-12 Thread forest_soup
We have a 3-node SolrCloud. Each Solr collection has only 1 shard and 1
replica.
When we restarted the 3 Solr nodes, we found that all the cores on one Solr
node were in the "down" state and never changed to any other state. The Solr
node is shown in /live_nodes in ZooKeeper.

After restarting all the ZooKeeper servers and Solr nodes, the issue was resolved.

We do not see any clue in solr.log.
2016-09-06 19:23:16.474 WARN  (main) [   ] o.e.j.s.h.RequestLogHandler
!RequestLog
2016-09-06 19:23:17.418 WARN  (main) [   ] o.e.j.s.SecurityHandler
ServletContext@o.e.j.w.WebAppContext@26837057{/solr,file:/opt/ibm/solrsearch/SolrG2Cld101/solr/server/solr-webapp/webapp/,STARTING}{/opt/ibm/solrsearch/SolrG2Cld101/solr/server/solr-webapp/webapp}
has uncovered http methods for path: /
2016-09-06 19:23:18.567 WARN  (main) [   ] o.a.s.c.SolrResourceLoader Can't
find (or read) directory to add to classloader: lib (resolved as:
/mnt/solrdata1/solr/home/lib).
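
A minimal SolrJ sketch of how one might cross-check /live_nodes against the
per-replica core states in such a situation (the ZooKeeper address and the
collection name are placeholders, and the SolrJ 5.x/6.x CloudSolrClient API is
assumed):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;

public class ClusterStateCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder ZooKeeper ensemble; use the same zkHost the Solr nodes use.
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181")) {
      client.connect();
      ClusterState state = client.getZkStateReader().getClusterState();
      System.out.println("Live nodes: " + state.getLiveNodes());
      // Placeholder collection name; a node can appear in /live_nodes while
      // its replicas are still reported as 'down'.
      DocCollection coll = state.getCollection("mycollection");
      for (Replica replica : coll.getReplicas()) {
        System.out.println(replica.getName() + " on " + replica.getNodeName()
            + " -> " + replica.getState());
      }
    }
  }
}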






Re: How to swap two cores and then unload one of them

2016-09-12 Thread Fabrizio Fortino
Hi George,

Thank you for getting back to me.

I am using Solr 6.

I need to use coreContainer because I have created a CoreAdminHandler
extension.

Thanks,
Fabrizio

On Sun, Sep 11, 2016 at 6:42 PM, Georg Sorst  wrote:

> Hi Fabrizio,
>
> which Solr version are you using? In more recent versions (starting with 5
> I think) you should not use the coreContainer directly but instead go
> through the HTTP API (which also supports the swap operation) or use SolrJ.
>
> Best,
> Georg
>
> Fabrizio Fortino  wrote on Mon., Aug 29, 2016, 11:53:
>
> > I have a NON-Cloud Solr and I am trying to use the swap functionality to
> > push an updated core into production without downtime.
> >
> > Here are the steps I am executing
> > 1. Solr is up and running with a single core (name = 'livecore')
> > 2. I create a new core with the latest version of my documents (name =
> > 'newcore')
> > 3. I swap the cores -> coreContainer.swap("newcore", "livecore")
> > 4. I try to unload "newcore" (that points to the old one) and remove all
> > the related dirs -> coreContainer.unload("newcore", true, true, true)
> >
> > The first three operations are OK. But when I try to execute the last one
> > the Solr log starts printing the following messages forever
> >
> > 61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore
> is
> > not yet closed, waiting 100 ms before checking again.
> >
> > I have opened an issue on this problem (
> > https://issues.apache.org/jira/browse/SOLR-8757) but I have not received
> > any answer yet.
> >
> > In the meantime I have found the following workaround: I try to manually
> > close all the core references before unloading it. Here is the code:
> >
> > SolrCore core = coreContainer.create("newcore", coreProps)
> > coreContainer.swap("newcore", "livecore")
> > // the old livecore is now newcore, so unload it and remove all the related dirs
> > SolrCore oldCore = coreContainer.getCore("newCore")
> > while (oldCore.getOpenCount > 1) {
> >   oldCore.close()
> > }
> > coreContainer.unload("newcore", true, true, true)
> >
> >
> > This seemed to work but there are some race conditions, and from time to
> > time I get a ConcurrentModificationException and then abnormal CPU
> > consumption.
> >
> > I filed a separate issue on this
> > https://issues.apache.org/jira/browse/SOLR-9208 but this is not considered
> > an issue by the Solr committers. The suggestion was to move the discussion
> > here to the mailing list.
> >
> > If this is not an issue, what are the steps to swap two cores and unload
> > one of them?
> >
> > Thanks a lot,
> > Fabrizio
> >
>
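
As an illustration of Georg's suggestion to go through the HTTP API or SolrJ
rather than CoreContainer, a hedged sketch using SolrJ's CoreAdminRequest (the
base URL and core names are placeholders; the same calls map to
/admin/cores?action=SWAP and action=UNLOAD):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.params.CoreAdminParams.CoreAdminAction;

public class SwapAndUnload {
  public static void main(String[] args) throws Exception {
    // Placeholder base URL of the Solr node (not of a specific core).
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr")) {
      // Swap the freshly built core into the "live" name.
      CoreAdminRequest swap = new CoreAdminRequest();
      swap.setAction(CoreAdminAction.SWAP);
      swap.setCoreName("newcore");
      swap.setOtherCoreName("livecore");
      swap.process(client);

      // Unload the core that now holds the old index and delete its directories.
      CoreAdminRequest.Unload unload = new CoreAdminRequest.Unload(true); // deleteIndex
      unload.setCoreName("newcore");
      unload.setDeleteDataDir(true);
      unload.setDeleteInstanceDir(true);
      unload.process(client);
    }
  }
}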


Re: Solr Configuration for Hortonworks HA

2016-09-12 Thread Mikhail Khludnev
Hello,

Given
https://community.hortonworks.com/questions/1926/accessing-hdfs-in-namenode-ha-environment.html
you just need to put the path without hdfs://host.
When I tried it, the leading slash was ignored and the path was treated as
relative to hdfs://ha-node:8020/user/{bash-user}.
Then point the HDFS configuration directory at a folder containing the HA
configuration (core-site.xml), e.g. /etc/hadoop/2.4.0.0-169/666/.
Let me know if it helps.

On Fri, Sep 9, 2016 at 5:25 PM, Heybati Farhad 
wrote:

> Hi All,
>
> We implemented the Hortonworks standby NameNode (HA) and I'm wondering how
> to configure Solr to point to the cluster name instead of the NameNode
> hostname.
>
>  
> ?
>
> I tried to configure Solr in several ways without success:
> 1) Using the cluster name
> 2) Using a ","-separated list of the hostnames of both the active and standby NameNodes
> 3) Using a ";"-separated list of the hostnames of both the active and standby NameNodes
>
> Do you have any suggestion?
>
> Thanks
> Regards
> Farhad
>



-- 
Sincerely yours
Mikhail Khludnev


[Solr facet distinct count] Benchmark and implementation details

2016-09-12 Thread Alessandro Benedetti
Hi gents,
was taking a look at the ways to calculate a distinct count per facet.

Reading through Yonik's blogs [1] it seems quite safe to assume that
"unique(field)" is the approach to go with.

Do we have any benchmark or details about the implementation?
Because, as per Yonik's blog, it is faster than HyperLogLog, so I assume it is
using different data structures and algorithms.
Worst case scenario I go through the code, but any presentation or blog
would be useful!
Cheers


[1] http://yonik.com/solr-count-distinct/ ,
http://yonik.com/facet-performance/
-- 
--

Benedetti Alessandro
Visiting card : http://about.me/alessandro_benedetti

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England
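
For concreteness, a small sketch of issuing the unique(field) aggregation from
[1] through SolrJ via the json.facet parameter (the core URL and the field
names "category" and "author" are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class UniqueFacetExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection")) {
      SolrQuery query = new SolrQuery("*:*");
      query.setRows(0);
      // Terms facet on "category" with a distinct count of "author" per bucket.
      query.add("json.facet",
          "{categories:{type:terms, field:category,"
          + " facet:{distinctAuthors:\"unique(author)\"}}}");
      QueryResponse response = client.query(query);
      // The JSON Facet results come back under the "facets" key of the raw response.
      System.out.println(response.getResponse().get("facets"));
    }
  }
}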


Unable to connect to correct port in solr 6.2.0

2016-09-12 Thread Preeti Bhat
Hi All,

I am trying to set up Solr on Red Hat Linux using the install_solr_service.sh
script from the solr-6.2.0 tgz. The script runs and starts Solr on port 8983
even when the port is explicitly specified as 2016.

/root/install_solr_service.sh solr-6.2.0.tgz -i /opt -d /var/solr -u root -s 
solr -p 2016

Is this the correct way to set up Solr on Linux? Also, I have observed that if
I go to bin/solr and start it with the port number, it works as expected, but
not as a service.

I would like to set up Solr in SolrCloud mode with external ZooKeepers.

Could someone please advise on this?







JNDI settings

2016-09-12 Thread Aristedes Maniatis
I am using Solr 5.5 and want to add JNDI settings to Solr (for data import).
I'm new to the Solr Cloud setup (previously I was running Solr as a custom
bundled war), so I can't figure out where to put the JNDI settings with the
user/pass themselves.

I don't want to add it to jetty.xml because that's part of the packaged 
application which will be upgraded from time to time.

Should it go into solr.xml inside the solr.home directory? If so, what's the 
right syntax there?


Ari


-- 
-->
Aristedes Maniatis
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A


Re: Monitoring Apache Solr

2016-09-12 Thread Bram Van Dam
> I am trying to monitor Apache Solr, because Solr often runs out of heap and the
> collection status becomes "down". How can I monitor Apache Solr?
> Are there any tools for monitoring Solr, or how should I do it?

The easiest way is to use the Solr ping feature:
https://cwiki.apache.org/confluence/display/solr/Ping

It will quickly and reliably tell you if Solr is still alive.

There is also a status call: /solr/admin/info/system?wt=json which can
tell you how much free memory you have left.

 - Bram
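
A small SolrJ sketch of the same ping check, assuming the core's /admin/ping
handler is enabled (the URL and core name are placeholders):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class PingCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder core URL; ping() hits the core's /admin/ping handler.
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore")) {
      SolrPingResponse ping = client.ping();
      System.out.println("status=" + ping.getStatus() + " qtime=" + ping.getQTime() + "ms");
    }
  }
}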



Re: How to swap two cores and then unload one of them

2016-09-12 Thread Georg Sorst
Hi Fabrizio,

I guess the correct way to add your modified / extended CoreAdminHandler
would be to add it as a  in solrconfig.xml

Best,
Georg

Fabrizio Fortino  wrote on Mon., Sep 12, 2016, 11:24:

> Hi George,
>
> Thank you for getting back to me.
>
> I am using Solr 6.
>
> I need to use coreContainer because I have created a CoreAdminHandler
> extension.
>
> Thanks,
> Fabrizio
>
> On Sun, Sep 11, 2016 at 6:42 PM, Georg Sorst 
> wrote:
>
> > Hi Fabrizio,
> >
> > which Solr version are you using? In more recent versions (starting with
> 5
> > I think) you should not use the coreContainer directly but instead go
> > through the HTTP API (which also supports the swap operation) or use
> SolrJ.
> >
> > Best,
> > Georg
> >
> > Fabrizio Fortino  wrote on Mon., Aug 29, 2016,
> 11:53:
> >
> > > I have a NON-Cloud Solr and I am trying to use the swap functionality
> to
> > > push an updated core into production without downtime.
> > >
> > > Here are the steps I am executing
> > > 1. Solr is up and running with a single core (name = 'livecore')
> > > 2. I create a new core with the latest version of my documents (name =
> > > 'newcore')
> > > 3. I swap the cores -> coreContainer.swap("newcore", "livecore")
> > > 4. I try to unload "newcore" (that points to the old one) and remove
> all
> > > the related dirs -> coreContainer.unload("newcore", true, true, true)
> > >
> > > The first three operations are OK. But when I try to execute the last
> one
> > > the Solr log starts printing the following messages forever
> > >
> > > 61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore
> > is
> > > not yet closed, waiting 100 ms before checking again.
> > >
> > > I have opened an issue on this problem (
> > > https://issues.apache.org/jira/browse/SOLR-8757) but I have not
> received
> > > any answer yet.
> > >
> > > In the meantime I have found the following workaround: I try to
> manually
> > > close all the core references before unloading it. Here is the code:
> > >
> > > SolrCore core = coreContainer.create("newcore", coreProps)
> > > coreContainer.swap("newcore", "livecore")
> > > // the old livecore is now newcore, so unload it and remove all the related dirs
> > > SolrCore oldCore = coreContainer.getCore("newCore")
> > > while (oldCore.getOpenCount > 1) {
> > >   oldCore.close()
> > > }
> > > coreContainer.unload("newcore", true, true, true)
> > >
> > >
> > > This seemed to work but there are some race conditions, and from time to
> > > time I get a ConcurrentModificationException and then abnormal CPU
> > > consumption.
> > >
> > > I filed a separate issue on this
> > > https://issues.apache.org/jira/browse/SOLR-9208 but this is not considered
> > > an issue by the Solr committers. The suggestion was to move the discussion
> > > here to the mailing list.
> > >
> > > If this is not an issue, what are the steps to swap two cores and unload
> > > one of them?
> > >
> > > Thanks a lot,
> > > Fabrizio
> > >
> >
>


Re: Unable to connect to correct port in solr 6.2.0

2016-09-12 Thread Shalin Shekhar Mangar
Which version of Red Hat? Is lsof installed on this system?

On Mon, Sep 12, 2016 at 4:30 PM, Preeti Bhat 
wrote:

> HI All,
>
> I am trying to setup the solr in Redhat Linux, using the
> install_solr_service.sh script of solr.6.2.0  tgz. The script runs and
> starts the solr on port 8983 even when the port is specifically specified
> as 2016.
>
> /root/install_solr_service.sh solr-6.2.0.tgz -i /opt -d /var/solr -u root
> -s solr -p 2016
>
> Is this correct way to setup solr in linux? Also, I have observed that if
> I go to the /bin/solr and start with the port number its working as
> expected but not as service.
>
> I would like to setup the SOLR in SOLRCloud mode with external zookeepers.
>
> Could someone please advise on this?
>
>
>
>
>
>


-- 
Regards,
Shalin Shekhar Mangar.


Re: Unable to connect to correct port in solr 6.2.0

2016-09-12 Thread Shalin Shekhar Mangar
I just tried this out on Ubuntu (sorry, I don't have access to a Red Hat
system) and it works fine.

One thing you have to take care of: if you install the service on the default
8983 port, then trying to upgrade with the same tar to a different port does
not work. So please make sure that you hadn't already installed the service
before.

On Tue, Sep 13, 2016 at 12:53 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> Which version of red hat? Is lsof installed on this system?
>
> On Mon, Sep 12, 2016 at 4:30 PM, Preeti Bhat 
> wrote:
>
>> HI All,
>>
>> I am trying to setup the solr in Redhat Linux, using the
>> install_solr_service.sh script of solr.6.2.0  tgz. The script runs and
>> starts the solr on port 8983 even when the port is specifically specified
>> as 2016.
>>
>> /root/install_solr_service.sh solr-6.2.0.tgz -i /opt -d /var/solr -u root
>> -s solr -p 2016
>>
>> Is this correct way to setup solr in linux? Also, I have observed that if
>> I go to the /bin/solr and start with the port number its working as
>> expected but not as service.
>>
>> I would like to setup the SOLR in SOLRCloud mode with external zookeepers.
>>
>> Could someone please advise on this?
>>
>>
>>
>>
>>
>>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>



-- 
Regards,
Shalin Shekhar Mangar.


Re: Monitoring Apache Solr

2016-09-12 Thread Jan Høydahl
I’ve heard several people recommend Sensu lately (https://sensuapp.org/), but
I have not tested it yet.
They seem to have a Solr plugin out of the box as well:
https://github.com/sensu-plugins/sensu-plugins-solr

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 30 Aug 2016, at 11:59, Hardika Catur S  wrote:
> 
> Hi,
> 
> I am trying to monitor Apache Solr, because Solr often runs out of heap and the
> collection status becomes "down". How can I monitor Apache Solr?
> Are there any tools for monitoring Solr, or how should I do it?
> 
> Please help me to find a solution.
> 
> Thanks,
> Hardika CS.



Re: ConcurrentUpdateSolrClient threads

2016-09-12 Thread Rallavagu

Any takers?

On 9/9/16 9:03 AM, Rallavagu wrote:

All,

Running Solr 5.4.1 with embedded Jetty, with frequent updates coming in and
softCommit set to 10 min. What I am noticing is occasional "slow" updates
(taking 8 to 15 seconds sometimes) and, around the same time, slow QTimes.
Upon investigating, it appears that
"ConcurrentUpdateSolrClient:blockUntilFinished:429" is waiting for a thread
to become free. Looking at https://issues.apache.org/jira/browse/SOLR-8500,
it appears to provide an option to increase the number of threads, which
might help manage more updates without having to wait (though it requires
updating Solr to 5.5). I could not figure out the default number of threads
for the ConcurrentUpdateSolrClient class. Before I try increasing the number
of threads, I'm wondering if there are any "gotchas" in doing so, and what a
reasonable number of threads would be.


org.apache.solr.update.SolrCmdDistributor:finish:90 (method time = 0 ms,
total time = 7489 ms)
 org.apache.solr.update.SolrCmdDistributor:blockAndDoRetries:232 (method
time = 0 ms, total time = 7489 ms)
  org.apache.solr.update.StreamingSolrClients:blockUntilFinished:107
(method time = 0 ms, total time = 7489 ms)

org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient:blockUntilFinished:429
(method time = 0 ms, total time = 7489 ms)
java.lang.Object:wait (method time = 7489 ms, total time = 7489 ms)


Thanks in advance
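
For reference, on the SolrJ client side the runner-thread count is an explicit
constructor argument; a minimal sketch (the URL, queue size, and thread count
are placeholder values, and this is the client-side API rather than the
server-side StreamingSolrClients setting discussed in SOLR-8500):

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ConcurrentUpdateExample {
  public static void main(String[] args) throws Exception {
    // queueSize and threadCount are placeholders; updates are buffered in the
    // queue and drained by the runner threads in the background.
    ConcurrentUpdateSolrClient client = new ConcurrentUpdateSolrClient(
        "http://localhost:8983/solr/mycollection", 1000, 4);
    try {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      client.add(doc);
      client.blockUntilFinished(); // wait until queued updates have been sent
    } finally {
      client.close();
    }
  }
}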


Re: Unable to connect to correct port in solr 6.2.0

2016-09-12 Thread Jan Høydahl
I tried it on a Docker RHEL system (gidikern/rhel-oracle-jre) and the install 
failed with errors

./install_solr_service.sh: line 322: update-rc.d: command not found
./install_solr_service.sh: line 326: service: command not found
./install_solr_service.sh: line 328: service: command not found

Turns out that /proc/version reports “Ubuntu” on this system:
Linux version 4.4.19-moby (root@3934ed318998) (gcc version 5.4.0 20160609 
(Ubuntu 5.4.0-6ubuntu1~16.04.2) ) #1 SMP Thu Sep 1 09:44:30 UTC 2016
There is also a /etc/redhat-release file:
Red Hat Enterprise Linux Server release 7.1 (Maipo)

So the install of the rc.d scripts failed completely because of this. Don’t know if
this is common on RHEL systems; perhaps we need to improve distro detection in the
installer?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 12 Sep 2016, at 21:31, Shalin Shekhar Mangar  wrote:
> 
> I just tried this out on ubuntu (sorry I don't have access to a red hat
> system) and it works fine.
> 
> One thing that you have to take care of is that if you install the service
> on the default 8983 port then, trying to upgrade with the same tar to a
> different port does not work. So please ensure that you hadn't already
> installed the service before already.
> 
> On Tue, Sep 13, 2016 at 12:53 AM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
> 
>> Which version of red hat? Is lsof installed on this system?
>> 
>> On Mon, Sep 12, 2016 at 4:30 PM, Preeti Bhat 
>> wrote:
>> 
>>> HI All,
>>> 
>>> I am trying to setup the solr in Redhat Linux, using the
>>> install_solr_service.sh script of solr.6.2.0  tgz. The script runs and
>>> starts the solr on port 8983 even when the port is specifically specified
>>> as 2016.
>>> 
>>> /root/install_solr_service.sh solr-6.2.0.tgz -i /opt -d /var/solr -u root
>>> -s solr -p 2016
>>> 
>>> Is this correct way to setup solr in linux? Also, I have observed that if
>>> I go to the /bin/solr and start with the port number its working as
>>> expected but not as service.
>>> 
>>> I would like to setup the SOLR in SOLRCloud mode with external zookeepers.
>>> 
>>> Could someone please advise on this?
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>> 
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.



Re: Unable to connect to correct port in solr 6.2.0

2016-09-12 Thread Kevin Risden
Jan - the issue you are hitting is Docker: /proc/version reports the
underlying host OS kernel, not what you would expect from inside the Docker
container. The errors for update-rc.d and service occur because the Docker
image you are using is trimmed down.

Kevin Risden

On Mon, Sep 12, 2016 at 3:19 PM, Jan Høydahl  wrote:

> I tried it on a Docker RHEL system (gidikern/rhel-oracle-jre) and the
> install failed with errors
>
> ./install_solr_service.sh: line 322: update-rc.d: command not found
> ./install_solr_service.sh: line 326: service: command not found
> ./install_solr_service.sh: line 328: service: command not found
>
> Turns out that /proc/version returns “Ubuntu” this on the system:
> Linux version 4.4.19-moby (root@3934ed318998) (gcc version 5.4.0 20160609
> (Ubuntu 5.4.0-6ubuntu1~16.04.2) ) #1 SMP Thu Sep 1 09:44:30 UTC 2016
> There is also a /etc/redhat-release file:
> Red Hat Enterprise Linux Server release 7.1 (Maipo)
>
> So the install of rc.d failed completely because of this. Don’t know if
> this is common on RHEL systems, perhaps we need to improve distro detection
> in installer?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > On 12 Sep 2016, at 21:31, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
> >
> > I just tried this out on ubuntu (sorry I don't have access to a red hat
> > system) and it works fine.
> >
> > One thing that you have to take care of is that if you install the
> service
> > on the default 8983 port then, trying to upgrade with the same tar to a
> > different port does not work. So please ensure that you hadn't already
> > installed the service before already.
> >
> > On Tue, Sep 13, 2016 at 12:53 AM, Shalin Shekhar Mangar <
> > shalinman...@gmail.com> wrote:
> >
> >> Which version of red hat? Is lsof installed on this system?
> >>
> >> On Mon, Sep 12, 2016 at 4:30 PM, Preeti Bhat 
> >> wrote:
> >>
> >>> HI All,
> >>>
> >>> I am trying to setup the solr in Redhat Linux, using the
> >>> install_solr_service.sh script of solr.6.2.0  tgz. The script runs and
> >>> starts the solr on port 8983 even when the port is specifically
> specified
> >>> as 2016.
> >>>
> >>> /root/install_solr_service.sh solr-6.2.0.tgz -i /opt -d /var/solr -u
> root
> >>> -s solr -p 2016
> >>>
> >>> Is this correct way to setup solr in linux? Also, I have observed that
> if
> >>> I go to the /bin/solr and start with the port number its working as
> >>> expected but not as service.
> >>>
> >>> I would like to setup the SOLR in SOLRCloud mode with external
> zookeepers.
> >>>
> >>> Could someone please advise on this?
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Regards,
> >> Shalin Shekhar Mangar.
> >>
> >
> >
> >
> > --
> > Regards,
> > Shalin Shekhar Mangar.
>
>


Re: How to enable JMX to monitor Jetty

2016-09-12 Thread Rallavagu
I have modified modules/http.mod as follows (for Solr 5.4.1, Jetty 9).
As you can see, I have referenced jetty-jmx.xml.


#
# Jetty HTTP Connector
#

[depend]
server

[xml]
etc/jetty-http.xml
etc/jetty-jmx.xml



On 5/21/16 3:59 AM, Georg Sorst wrote:

Hi list,

how do I correctly enable JMX in Solr 6 so that I can monitor Jetty's
thread pool?

The first step is to set ENABLE_REMOTE_JMX_OPTS="true" in bin/solr.in.sh.
This will give me JMX access to JVM properties (garbage collection, class
loading etc.) and works fine. However, this will not give me any Jetty
specific properties.

I've tried manually adding jetty-jmx.xml from the jetty 9 distribution to
server/etc/ and then starting Solr with 'java ... start.jar
etc/jetty-jmx.xml'. This works fine and gives me access to the right
properties, but seems wrong. I could similarly copy the contents of
jetty-jmx.xml into jetty.xml but this is not much better either.

Is there a correct way for this?

Thanks!
Georg



Miserable Experience Using Solr. Again.

2016-09-12 Thread Aaron Greenspan
Hi,

I have been on this list for some time because I know that any time I try to do 
anything related to Solr I’m going to have to spend hours on it, wondering why 
everything has to be so awful, and I just want somewhere to provide feedback 
with the dim hope that the product might improve one day. (So far, for my 
purposes, it hasn’t.) Sure enough, I still absolutely hate using Solr, and I 
have more feedback.

I started with a confusing error on the web console, which I still can’t figure 
out how to password protect without going through an insanely complicated process 
involving "ZooKeeper," which I don’t know anything about, or have, to the best of 
my knowledge:

Problem accessing /solr/. Reason:

Forbidden

According to logs, this apparently meant that a MySQL query had failed due to a 
field name change. Since I would have to change my XML configuration files, I 
decided to use the opportunity to upgrade from Solr 5.1.4 to 6.2.0. It broke 
everything.

First I was getting errors about "Unsupported major.minor version 52.0", so I 
needed to install the Linux x64 JRE 1.8.0, which I managed on CentOS 6 with...

yum install openjdk-1.8.0

...going to Oracle’s web site, downloading the latest JRE 1.8 build, and then 
running...

yum localinstall jre-8u101-linux-x64.rpm

So far so good. But I didn’t have JAVA_HOME set properly apparently, so I 
needed to do the not-exactly-intuitive…

export 
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre/

As usual, I manually moved over my mysql-connector-java-5.1.38-bin.jar file 
from the dist/ folder in the old version to the new one. Then after stopping 
the old process (with kill -9, since there seems to be no graceful way to shut 
down Solr—README.txt doesn’t mention bin/solr stop) I moved over my two core 
folders from the old server/solr/ folder. I tried to start it up with bin/solr 
start, and watched the errors roll in.

There was some kind of problem with StopFilterFactory and the text_general 
field type. Thanks to Stack Overflow I was able to determine that the apparent 
problem was that there was a parameter, previously fine, which was no longer 
fine. So I removed all instances of enablePositionIncrements="true". That 
helped, but then I ran into a broader error: "Plugin Initializing failure for 
[schema.xml] fieldType". It didn’t say which field type. Buried in the logs I 
found a reference in the Java stack trace—which *disappears* (and distorts the 
viewing window horribly) after a few seconds when you try to view it in the web 
log UI—to the string "units="degrees"". Sure enough, this string appeared in my 
schema.xml for a class called "solr.SpatialRecursivePrefixTreeFieldType" that 
I’m pretty sure I never use. I removed that parameter, and moved on to the next 
set of errors.

Apparently there is some aspect of the Thai text field type that Solr 6.2.0 
doesn’t like. So I disabled it. I don’t use Thai text.

Now Solr was complaining about "Error loading class 
'solr.admin.AdminHandlers'". So I found the reference to 
solr.admin.AdminHandlers in solrconfig.xml for each of my cores and commented 
it out. Only then did Solr work again.

This was not a smooth process. It took about two hours. The user interface is 
still as buggy as an early alpha of most products, the errors are difficult to 
understand when they don’t actually specify what’s wrong (and they almost never 
do), and there should have been an automatic process to highlight and fix 
problems in old (pre-6) configuration files. Never mind the fact that the 
XML-based configuration process is an antiquated nightmare when the rest of the 
world has long since moved onto databases.

Maybe this will help someone else out there.

Aaron

PlainSite | http://www.plainsite.org

Re: Miserable Experience Using Solr. Again.

2016-09-12 Thread John Bickerstaff
For what it's worth - I found enough frustration upgrading that I decided
to "upgrade by replacement"

Now, I suppose if you've got a huge dataset to re-index that could be a
problem, but just in case an option like that helps you, I'll suggest this.

1. Install 6.x on a new machine using the "install for production"
instructions
2. Use the configs from one of the sample projects to create an
appropriately-named collection
3. Use the ability to "include" your configs into the other configs (they
live in separate files)
  I can provide more help here if you're interested
4. Re-index all your data into the new version of SOLR...

I have rough, but useable docs on this if you are interested in attempting
this approach.

On Mon, Sep 12, 2016 at 3:48 PM, Aaron Greenspan <
aaron.greens...@plainsite.org> wrote:

> Hi,
>
> I have been on this list for some time because I know that any time I try
> to do anything related to Solr I’m going to have to spend hours on it,
> wondering why everything has to be so awful, and I just want somewhere to
> provide feedback with the dim hope that the product might improve one day.
> (So far, for my purposes, it hasn’t.) Sure enough, I still absolutely hate
> using Solr, and I have more feedback.
>
> I started with a confusing error on the web console, which I still can’t
> figure out how to password protect without going through an insanely
> process involving "ZooKeeper," which I don’t know anything about, or have,
> to the best of my knowledge:
>
> Problem accessing /solr/. Reason:
>
> Forbidden
>
> According to logs, this apparently meant that a MySQL query had failed due
> to a field name change. Since I would have to change my XML configuration
> files, I decided to use the opportunity to upgrade from Solr 5.1.4 to
> 6.2.0. It broke everything.
>
> First I was getting errors about "Unsupported major.minor version 52.0",
> so I needed to install the Linux x64 JRE 1.8.0, which I managed on CentOS 6
> with...
>
> yum install openjdk-1.8.0
>
> ...going to Oracle’s web site, downloading the latest JRE 1.8 build, and
> then running...
>
> yum localinstall jre-8u101-linux-x64.rpm
>
> So far so good. But I didn’t have JAVA_HOME set properly apparently, so I
> needed to do the not-exactly-intuitive…
>
> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.
> el6_8.x86_64/jre/
>
> As usual, I manually moved over my mysql-connector-java-5.1.38-bin.jar
> file from the dist/ folder in the old version to the new one. Then after
> stopping the old process (with kill -9, since there seems to be no graceful
> way to shut down Solr—README.txt doesn’t mention bin/solr stop) I moved
> over my two core folders from the old server/solr/ folder. I tried to start
> it up with bin/solr start, and watched the errors roll in.
>
> There was some kind of problem with StopFilterFactory and the text_general
> field type. Thanks to Stack Overflow I was able to determine that the
> apparent problem was that there was a parameter, previously fine, which was
> no longer fine. So I removed all instances of enablePositionIncrements="true".
> That helped, but then I ran into a broader error: "Plugin Initializing
> failure for [schema.xml] fieldType". It didn’t say which field type. Buried
> in the logs I found a reference in the Java stack trace—which *disappears*
> (and distorts the viewing window horribly) after a few seconds when you try
> to view it in the web log UI—to the string "units="degrees"". Sure enough,
> this string appeared in my schema.xml for a class called "solr.
> SpatialRecursivePrefixTreeFieldType" that I’m pretty sure I never use. I
> removed that parameter, and moved on to the next set of errors.
>
> Apparently there is some aspect of the Thai text field type that Solr
> 6.2.0 doesn’t like. So I disabled it. I don’t use Thai text.
>
> Now Solr was complaining about "Error loading class
> 'solr.admin.AdminHandlers'". So I found the reference to
> solr.admin.AdminHandlers in solrconfig.xml for each of my cores and
> commented it out. Only then did Solr work again.
>
> This was not a smooth process. It took about two hours. The user interface
> is still as buggy as an early alpha of most products, the errors are
> difficult to understand when they don’t actually specify what’s wrong (and
> they almost never do), and there should have been an automatic process to
> highlight and fix problems in old (pre-6) configuration files. Never mind
> the fact that the XML-based configuration process is an antiquated
> nightmare when the rest of the world has long since moved onto databases.
>
> Maybe this will help someone else out there.
>
> Aaron
>
> PlainSite | http://www.plainsite.org


Re: Miserable Experience Using Solr. Again.

2016-09-12 Thread John Bickerstaff
I would also add that dealing with Java versions has always been a pain
until you get used to the whole "JAVA HOME" thing, but that this isn't
anything to do with SOLR per se - it's just part and parcel of dealing with
open source software that uses Java...

Big changes between major versions of any software are common - there is
some documentation here that may help...

https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+5+to+Solr+6

It was reading this that made me decide to try "upgrade by replace" to
avoid the whole "update" issue entirely - although I also had to upgrade
Java on my VMs...

On Mon, Sep 12, 2016 at 4:05 PM, John Bickerstaff 
wrote:

> For what it's worth - I found enough frustration upgrading that I decided
> to "upgrade by replacement"
>
> Now, I suppose if you've got a huge dataset to re-index that could be a
> problem, but just in case an option like that helps you, I'll suggest this.
>
> 1. Install 6.x on a new machine using the "install for production"
> instructions
> 2. Use the configs from one of the sample projects to create an
> appropriately-named collection
> 3. Use the ability to "include" your configs into the other configs (they
> live in separate files)
>   I can provide more help here if you're interested
> 4. Re-index all your data into the new version of SOLR...
>
> I have rough, but useable docs on this if you are interested in attempting
> this approach.
>
> On Mon, Sep 12, 2016 at 3:48 PM, Aaron Greenspan <
> aaron.greens...@plainsite.org> wrote:
>
>> Hi,
>>
>> I have been on this list for some time because I know that any time I try
>> to do anything related to Solr I’m going to have to spend hours on it,
>> wondering why everything has to be so awful, and I just want somewhere to
>> provide feedback with the dim hope that the product might improve one day.
>> (So far, for my purposes, it hasn’t.) Sure enough, I still absolutely hate
>> using Solr, and I have more feedback.
>>
>> I started with a confusing error on the web console, which I still can’t
>> figure out how to password protect without going through an insanely
>> process involving "ZooKeeper," which I don’t know anything about, or have,
>> to the best of my knowledge:
>>
>> Problem accessing /solr/. Reason:
>>
>> Forbidden
>>
>> According to logs, this apparently meant that a MySQL query had failed
>> due to a field name change. Since I would have to change my XML
>> configuration files, I decided to use the opportunity to upgrade from Solr
>> 5.1.4 to 6.2.0. It broke everything.
>>
>> First I was getting errors about "Unsupported major.minor version 52.0",
>> so I needed to install the Linux x64 JRE 1.8.0, which I managed on CentOS 6
>> with...
>>
>> yum install openjdk-1.8.0
>>
>> ...going to Oracle’s web site, downloading the latest JRE 1.8 build, and
>> then running...
>>
>> yum localinstall jre-8u101-linux-x64.rpm
>>
>> So far so good. But I didn’t have JAVA_HOME set properly apparently, so I
>> needed to do the not-exactly-intuitive…
>>
>> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el
>> 6_8.x86_64/jre/
>>
>> As usual, I manually moved over my mysql-connector-java-5.1.38-bin.jar
>> file from the dist/ folder in the old version to the new one. Then after
>> stopping the old process (with kill -9, since there seems to be no graceful
>> way to shut down Solr—README.txt doesn’t mention bin/solr stop) I moved
>> over my two core folders from the old server/solr/ folder. I tried to start
>> it up with bin/solr start, and watched the errors roll in.
>>
>> There was some kind of problem with StopFilterFactory and the
>> text_general field type. Thanks to Stack Overflow I was able to determine
>> that the apparent problem was that there was a parameter, previously fine,
>> which was no longer fine. So I removed all instances of
>> enablePositionIncrements="true". That helped, but then I ran into a
>> broader error: "Plugin Initializing failure for [schema.xml] fieldType". It
>> didn’t say which field type. Buried in the logs I found a reference in the
>> Java stack trace—which *disappears* (and distorts the viewing window
>> horribly) after a few seconds when you try to view it in the web log UI—to
>> the string "units="degrees"". Sure enough, this string appeared in my
>> schema.xml for a class called "solr.SpatialRecursivePrefixTreeFieldType"
>> that I’m pretty sure I never use. I removed that parameter, and moved on to
>> the next set of errors.
>>
>> Apparently there is some aspect of the Thai text field type that Solr
>> 6.2.0 doesn’t like. So I disabled it. I don’t use Thai text.
>>
>> Now Solr was complaining about "Error loading class
>> 'solr.admin.AdminHandlers'". So I found the reference to
>> solr.admin.AdminHandlers in solrconfig.xml for each of my cores and
>> commented it out. Only then did Solr work again.
>>
>> This was not a smooth process. It took about two hours. The user
>> interface is still as buggy as an early alpha of most pro

Re: Miserable Experience ..... Again.

2016-09-12 Thread Erik Hatcher
Aaron - I for one sympathize.  When I pause to think of the stacks upon stacks 
of technologies that something like Solr is built upon… my head spins and I
feel for the folks coming to computer science these days and having the whole 
Java and Big Data stacks and all that goes along with that (JVM/mem/GC up to 
network topology and architecture with 3xZK, plus NxM Solr’s, and beyond to 
data modeling, schema design, and query parameter adjusting).

---

It’s good for us to hear the ugly/painful side of folks’ experiences.  It’s 
driven us to where I find myself iterating with Solr in my day job like 
this….

   $ bin/solr create -c my_collection
   $ bin/post -c my_collection /data/docs.json

and http://… /select?q=…&wt=csv…

So “it works for me”, but that’s not a nice way to approach the struggles of 
users.   Though we’ve come a long way, we’ve got a ways to go as well.

Erik

p.s. - 

> Never mind the fact that the XML-based configuration process is an antiquated 
> nightmare when the rest of the world has long since moved onto databases.

Well, to that point - the world that I work in really boils down to at least 
plain text (alas, mostly JSON these days, but even that’s an implementation 
detail) stuffed into git repositories, and played into new Solr environments by 
uploading configuration files, or more modernly, hitting the Solr configuration 
API’s to add/configure fields, set up request handlers, and the basics of what 
needs to be done.  No XML needed these days.   No (relational, JDBC) databases 
either, for that matter :)

> Maybe this will help someone else out there.

Thanks for taking the time to detail your struggles to the community.  It is 
helpful to see where the rough edges are in this whole business, and smoothing 
them out.   But it’s no easy business, having these stacks of dependencies and 
complexities piled on top of one another and trying to get it all fired up 
properly and usably.

Erik



Re: Miserable Experience ..... Again.

2016-09-12 Thread Joel Bernstein
I'm currently working on upgrading Alfresco from Solr 6.0 to Solr 6.2.
Should be easy. Think again. Lucene analyzer changes between Solr 6.0 and
Solr 6.2 and a new assert in ConjunctionDISI have caused days of work to
perform this simple upgrade.

Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Sep 12, 2016 at 7:05 PM, Erik Hatcher 
wrote:

> Aaron - I for one sympathize.  When I pause to think of the stacks upon
> stacks of technologies that something like Solr are built upon… my head
> spins and I feel for the folks coming to computer science these days and
> having the whole Java and Big Data stacks and all that goes along with that
> (JVM/mem/GC up to network topology and architecture with 3xZK, plus NxM
> Solr’s, and beyond to data modeling, schema design, and query parameter
> adjusting).
>
> ---
>
> It’s good for us to hear the ugly/painful side of folks experiences.  It’s
> driven us to to where I find myself iterating with Solr in my day job like
> this….
>
>$ bin/solr create -c my_collection
>$ bin/post -c my_collection /data/docs.json
>
> and http://… /select?q=…&wt=csv…
>
> So “it works for me”, but that’s not a nice way to approach the struggles
> of users.   Though we’ve come a long way, we’ve got a ways to go as well.
>
> Erik
>
> p.s. -
>
> > Never mind the fact that the XML-based configuration process is an
> antiquated nightmare when the rest of the world has long since moved onto
> databases.
>
> Well, to that point - the world that I work in really boils down to at
> least plain text (alas, mostly JSON these days, but even that’s an
> implementation detail) stuffed into git repositories, and played into new
> Solr environments by uploading configuration files, or more modernly,
> hitting the Solr configuration API’s to add/configure fields, set up
> request handlers, and the basics of what needs to be done.  No XML needed
> these days.   No (relational, JDBC) databases either, for that matter :)
>
> > Maybe this will help someone else out there.
>
> Thanks for taking the time to detail your struggles to the community.  It
> is helpful to see where the rough edges are in this whole business, and
> smoothing them out.   But it’s no easy business, having these stacks of
> dependencies and complexities piled on top of one another and trying to get
> it all fired up properly and usably.
>
> Erik
>
>


Re: Miserable Experience ..... Again.

2016-09-12 Thread Bradley Belyeu
I agree with what Erik wrote and some of Aaron’s original post. I’m relatively 
new to the Solr system (yes, pun intended) having just started diving into it a 
little over a year ago. I was in the “envious” position of being the only 
person who wanted to learn it and support it after our previous “expert” left 
the team. :D

It recently took me a full two week sprint cycle to upgrade our old style 
master-slaves cluster from 4.3 to 5.1 (we simultaneously went from java 1.7 to 
1.8 and updated our NewRelic wrapper). There were many pain points along the 
way especially since I’m not much of a Java developer (I’m a Python/PHP/JS 
guy). BUT that being said I learned such an incredible amount over that sprint 
about the JVM, Solr configs & classes, caching & index tuning, and Lucene 
itself that I wouldn’t have changed it a bit. Ok, that’s a lie, I would have 
fixed a problem with NGrams before it caused a couple hour production outage. 
But basically, the amount of knowledge gained was very well worth the time and 
effort put into working through the upgrade pain points.

@Aaron another thing I’ve learned over the years is to only change 1 thing at a 
time, and definitely don’t mix troubleshooting with upgrading.
https://en.wikibooks.org/wiki/Computer_Programming_Principles/Maintaining/Debugging#Change_one_thing_at_a_time

One thing I don’t understand from your email is when you said you were 
“wondering why everything has to be so awful” about Solr. The initial problems 
you described were related to a MySQL database change and issues with 
Zookeeper. I think choosing to do the Java updates and Solr updates are what 
led to the other issues. The Solr docs packaged with the release do a fairly 
good job of explaining the breaking changes with each version. I chose to do 
all my updates one minor version at a time so I could keep up with the changes 
in the docs (4.3->4.4->…4.10->5.0->5.1) which took longer but made 
troubleshooting much easier.

As far as the user interface goes, at least it has one unlike some other 
popular search tools (cough ElasticSearch cough). And as far as XML based 
configuration is concerned, I personally prefer it to scripts with JSON blobs 
and Rest calls to set up your collections/entities/docs/etc. But that’s just 
b/c I’m weird.

I do agree that the error messages are often not helpful, BUT I found it 
easiest to just look at the source code to find the root cause of the issue. 
Which is what I think this boils down to, is that rather than complain, we 
should work to make Solr what we want it to be. If you can identify an issue, 
then you can either solve it yourself with a pull request or create a Jira 
issue asking for help to get it fixed.
Here’s a quote I love from Maya Angelou, “What you're supposed to do when you 
don't like a thing is change it. If you can't change it, change the way you 
think about it. Don't complain.”


On 9/12/16, 6:05 PM, "Erik Hatcher"  wrote:

Aaron - I for one sympathize.  When I pause to think of the stacks upon 
stacks of technologies that something like Solr are built upon… my head spins 
and I feel for the folks coming to computer science these days and having the 
whole Java and Big Data stacks and all that goes along with that (JVM/mem/GC up 
to network topology and architecture with 3xZK, plus NxM Solr’s, and beyond to 
data modeling, schema design, and query parameter adjusting).

---

It’s good for us to hear the ugly/painful side of folks experiences.  It’s 
driven us to to where I find myself iterating with Solr in my day job like 
this….

   $ bin/solr create -c my_collection
   $ bin/post -c my_collection /data/docs.json

and http://… /select?q=…&wt=csv…

So “it works for me”, but that’s not a nice way to approach the struggles 
of users.   Though we’ve come a long way, we’ve got a ways to go as well.

Erik

p.s. - 

> Never mind the fact that the XML-based configuration process is an 
antiquated nightmare when the rest of the world has long since moved onto 
databases.

Well, to that point - the world that I work in really boils down to at 
least plain text (alas, mostly JSON these days, but even that’s an 
implementation detail) stuffed into git repositories, and played into new Solr 
environments by uploading configuration files, or more modernly, hitting the 
Solr configuration API’s to add/configure fields, set up request handlers, and 
the basics of what needs to be done.  No XML needed these days.   No 
(relational, JDBC) databases either, for that matter :)

> Maybe this will help someone else out there.

Thanks for taking the time to detail your struggles to the community.  It 
is helpful to see where the rough edges are in this whole business, and 
smoothing them out.   But it’s no easy business, having these stacks of 
dependencies and complexities piled on top of one another and trying to get it 
all fired up properly an

Re: Miserable Experience Using Solr. Again.

2016-09-12 Thread billnbell
Interested for sure

Bill Bell
Sent from mobile


> On Sep 12, 2016, at 4:05 PM, John Bickerstaff  
> wrote:
> 
> For what it's worth - I found enough frustration upgrading that I decided
> to "upgrade by replacement"
> 
> Now, I suppose if you've got a huge dataset to re-index that could be a
> problem, but just in case an option like that helps you, I'll suggest this.
> 
> 1. Install 6.x on a new machine using the "install for production"
> instructions
> 2. Use the configs from one of the sample projects to create an
> appropriately-named collection
> 3. Use the ability to "include" your configs into the other configs (they
> live in separate files)
>  I can provide more help here if you're interested
> 4. Re-index all your data into the new version of SOLR...
> 
> I have rough, but useable docs on this if you are interested in attempting
> this approach.
> 
> On Mon, Sep 12, 2016 at 3:48 PM, Aaron Greenspan <
> aaron.greens...@plainsite.org> wrote:
> 
>> Hi,
>> 
>> I have been on this list for some time because I know that any time I try
>> to do anything related to Solr I’m going to have to spend hours on it,
>> wondering why everything has to be so awful, and I just want somewhere to
>> provide feedback with the dim hope that the product might improve one day.
>> (So far, for my purposes, it hasn’t.) Sure enough, I still absolutely hate
>> using Solr, and I have more feedback.
>> 
>> I started with a confusing error on the web console, which I still can’t
>> figure out how to password protect without going through an insanely
>> process involving "ZooKeeper," which I don’t know anything about, or have,
>> to the best of my knowledge:
>> 
>> Problem accessing /solr/. Reason:
>> 
>>Forbidden
>> 
>> According to logs, this apparently meant that a MySQL query had failed due
>> to a field name change. Since I would have to change my XML configuration
>> files, I decided to use the opportunity to upgrade from Solr 5.1.4 to
>> 6.2.0. It broke everything.
>> 
>> First I was getting errors about "Unsupported major.minor version 52.0",
>> so I needed to install the Linux x64 JRE 1.8.0, which I managed on CentOS 6
>> with...
>> 
>> yum install openjdk-1.8.0
>> 
>> ...going to Oracle’s web site, downloading the latest JRE 1.8 build, and
>> then running...
>> 
>> yum localinstall jre-8u101-linux-x64.rpm
>> 
>> So far so good. But I didn’t have JAVA_HOME set properly apparently, so I
>> needed to do the not-exactly-intuitive…
>> 
>> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.
>> el6_8.x86_64/jre/
>> 
>> As usual, I manually moved over my mysql-connector-java-5.1.38-bin.jar
>> file from the dist/ folder in the old version to the new one. Then after
>> stopping the old process (with kill -9, since there seems to be no graceful
>> way to shut down Solr—README.txt doesn’t mention bin/solr stop) I moved
>> over my two core folders from the old server/solr/ folder. I tried to start
>> it up with bin/solr start, and watched the errors roll in.
>> 
>> There was some kind of problem with StopFilterFactory and the text_general
>> field type. Thanks to Stack Overflow I was able to determine that the
>> apparent problem was that there was a parameter, previously fine, which was
>> no longer fine. So I removed all instances of 
>> enablePositionIncrements="true".
>> That helped, but then I ran into a broader error: "Plugin Initializing
>> failure for [schema.xml] fieldType". It didn’t say which field type. Buried
>> in the logs I found a reference in the Java stack trace—which *disappears*
>> (and distorts the viewing window horribly) after a few seconds when you try
>> to view it in the web log UI—to the string "units="degrees"". Sure enough,
>> this string appeared in my schema.xml for a class called "solr.
>> SpatialRecursivePrefixTreeFieldType" that I’m pretty sure I never use. I
>> removed that parameter, and moved on to the next set of errors.
>> 
>> Apparently there is some aspect of the Thai text field type that Solr
>> 6.2.0 doesn’t like. So I disabled it. I don’t use Thai text.
>> 
>> Now Solr was complaining about "Error loading class
>> 'solr.admin.AdminHandlers'". So I found the reference to
>> solr.admin.AdminHandlers in solrconfig.xml for each of my cores and
>> commented it out. Only then did Solr work again.
>> 
>> This was not a smooth process. It took about two hours. The user interface
>> is still as buggy as an early alpha of most products, the errors are
>> difficult to understand when they don’t actually specify what’s wrong (and
>> they almost never do), and there should have been an automatic process to
>> highlight and fix problems in old (pre-6) configuration files. Never mind
>> the fact that the XML-based configuration process is an antiquated
>> nightmare when the rest of the world has long since moved onto databases.
>> 
>> Maybe this will help someone else out there.
>> 
>> Aaron
>> 
>> PlainSite | http://www.plainsite.org


Solr Cloud: Higher search latency with two nodes vs one node

2016-09-12 Thread Brent
I've been testing Solr Cloud 6.1.0 with two servers, and getting somewhat
disappointing query latency. I'm comparing the latency with the same tests,
running DSE in place of Solr Cloud. It's surprising, because running the
test just on my laptop (running a single instance of Solr), I get
significantly better latency with Solr than with DSE. 

Here's an overview of the test:
- Machine 1 - ZooKeeper server.
- Machine 2 - test driver, sending requests to the two test machines at a
rate of 200/sec total (so each test machine is processing 100/sec).
- Machine 3:
  - 1 Solr Cloud instance run with :2181 arg.
  - Java app - for each request received:
- Creates SolrQuery object from request data.
- Uses SolrClient.query(, ) to do a Solr search. 
- Creates SolrInputDocument and UpdateRequest objects, adds doc to
update request, and calls UpdateRequest.process(, ).
- Machine 4:
  - Duplicate of machine 3.

After seeing that Solr wasn't doing so well with two nodes compared to DSE,
but that it had been faster in single node tests on my laptop, I tried
running the test with just machine 3. So no Solr instance running on machine
4, and machine 2 is sending all 200 reqs/sec to machine 3. The latency in
this test was far better than a test with both machines using DSE, which was
better than a test with just one machine using DSE. This gives me hope.

To summarize the results, the latency I'm getting, in order from fastest to
slowest:
1) 1 node, Solr
2) 2 nodes, DSE
3) 1 node, DSE
4) 2 nodes Solr

In theory, shouldn't 2 nodes running Solr be the fastest? What could make
adding a second node cause the performance to decrease instead of increase
as I'd expect?

Relevant info:
I'm using a single Solr collection.
When running Solr with just one node, I create the collection with 1 shard. 
When running Solr with both nodes, I create the collection with 2 shards.
In both cases, I use replication factor = 1.

I'm using SolrJ 5.4.1, because it's the latest version of the library that
I've gotten working with both DSE and Solr Cloud, and, assuming I can get
Cloud to perform at least as well as DSE, I'll be needing to eventually talk
to both at once from within a single Java app. With DSE, I use
HttpSolrClient, giving it a URL of localhost so each Java app only talks to
the DSE instance running on the same machine. But with newer versions of
SolrJ, I'd get strange "String cannot be cast to " errors on
the server side when a string field value was of one or more seemingly
arbitrary lengths... I could literally add a blank space to the end of a
542-char length value and the error would go away... but that's neither here
nor there, just background on why I'm not using the latest SolrJ lib.

With Solr Cloud, I started out using CloudSolrClient for the SolrClient
objects, but have since switched to using HttpSolrClient to force each Java
app to talk directly to its local Solr instance. It produced a minor latency
improvement, but not significant. If I can get it so adding nodes actually
improves performance, I'll go back to CloudSolrClient because the
convenience of having built in support for handling failed nodes is pretty
great.

Here is the code to create the HttpSolrClient:
PoolingHttpClientConnectionManager connectionManager = new
PoolingHttpClientConnectionManager();
connectionManager.setDefaultMaxPerRoute(200);
connectionManager.setMaxTotal(5000);
RequestConfig reqConfig = RequestConfig.custom()
.setConnectTimeout(1000)
.setSocketTimeout(1000)
.build();
HttpClient httpClient = HttpClients.custom()
.setDefaultRequestConfig(reqConfig)
.setConnectionManager(connectionManager)
.build();
HttpSolrClient writeClient = new
HttpSolrClient("http://localhost:8983/solr";, httpClient);

Here is the code to create the CloudSolrClient:
PoolingHttpClientConnectionManager connectionManager = new
PoolingHttpClientConnectionManager();
connectionManager.setDefaultMaxPerRoute(200);
connectionManager.setMaxTotal(5000);
RequestConfig reqConfig = RequestConfig.custom()
.setConnectTimeout(1000)
.setSocketTimeout(1000)
.build();
HttpClient httpClient = HttpClients.custom()
.setDefaultRequestConfig(reqConfig)
.setConnectionManager(connectionManager)
.build();
CloudSolrClient writeClient = new CloudSolrClient("localhost:2181/solr",
httpClient);

I only have a single client instance in the Java app, shared amongst a
request handling threadpool, because I'm assuming it's threadsafe. Is that
correct? It's worked fine for DSE, so perhaps that's a dumb question.

The schema in both DSE and Solr tests is identical, and the solrconfig is as
close as I can get them given a small number of different settings
available.
Here's my complete solrconfig.xml for the Solr Cloud collection:


  6.1.0
  ${solr.data.dir:}
  
  
  
  
${solr.lock.type:native}
false
2048
true

  1
  0

 false
  
  
  

  ${solr.ulog.

Re: Miserable Experience Using Solr. Again.

2016-09-12 Thread John Bickerstaff
Sure - ping me off the list and I'll send my text file docs.

They're rough and (of course) focused on what I'm doing, but they just
might relieve some of the pain.

Caveat - all on Linux and command line - no Admin UI api's -- I like the
feel of the command line so I use it.

On Mon, Sep 12, 2016 at 8:41 PM,  wrote:

> Interested for sure
>
> Bill Bell
> Sent from mobile
>
>
> > On Sep 12, 2016, at 4:05 PM, John Bickerstaff 
> wrote:
> >
> > For what it's worth - I found enough frustration upgrading that I decided
> > to "upgrade by replacement"
> >
> > Now, I suppose if you've got a huge dataset to re-index that could be a
> > problem, but just in case an option like that helps you, I'll suggest
> this.
> >
> > 1. Install 6.x on a new machine using the "install for production"
> > instructions
> > 2. Use the configs from one of the sample projects to create an
> > appropriately-named collection
> > 3. Use the ability to "include" your configs into the other configs (they
> > live in separate files)
> >  I can provide more help here if you're interested
> > 4. Re-index all your data into the new version of SOLR...
> >
> > I have rough, but useable docs on this if you are interested in
> attempting
> > this approach.
> >
> > On Mon, Sep 12, 2016 at 3:48 PM, Aaron Greenspan <
> > aaron.greens...@plainsite.org> wrote:
> >
> >> Hi,
> >>
> >> I have been on this list for some time because I know that any time I
> try
> >> to do anything related to Solr I’m going to have to spend hours on it,
> >> wondering why everything has to be so awful, and I just want somewhere
> to
> >> provide feedback with the dim hope that the product might improve one
> day.
> >> (So far, for my purposes, it hasn’t.) Sure enough, I still absolutely
> hate
> >> using Solr, and I have more feedback.
> >>
> >> I started with a confusing error on the web console, which I still can’t
> >> figure out how to password protect without going through an insanely
> >> process involving "ZooKeeper," which I don’t know anything about, or
> have,
> >> to the best of my knowledge:
> >>
> >> Problem accessing /solr/. Reason:
> >>
> >>Forbidden
> >>
> >> According to logs, this apparently meant that a MySQL query had failed
> due
> >> to a field name change. Since I would have to change my XML
> configuration
> >> files, I decided to use the opportunity to upgrade from Solr 5.1.4 to
> >> 6.2.0. It broke everything.
> >>
> >> First I was getting errors about "Unsupported major.minor version 52.0",
> >> so I needed to install the Linux x64 JRE 1.8.0, which I managed on
> CentOS 6
> >> with...
> >>
> >> yum install openjdk-1.8.0
> >>
> >> ...going to Oracle’s web site, downloading the latest JRE 1.8 build, and
> >> then running...
> >>
> >> yum localinstall jre-8u101-linux-x64.rpm
> >>
> >> So far so good. But I didn’t have JAVA_HOME set properly apparently, so
> I
> >> needed to do the not-exactly-intuitive…
> >>
> >> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.
> >> el6_8.x86_64/jre/
> >>
> >> As usual, I manually moved over my mysql-connector-java-5.1.38-bin.jar
> >> file from the dist/ folder in the old version to the new one. Then after
> >> stopping the old process (with kill -9, since there seems to be no
> graceful
> >> way to shut down Solr—README.txt doesn’t mention bin/solr stop) I moved
> >> over my two core folders from the old server/solr/ folder. I tried to
> start
> >> it up with bin/solr start, and watched the errors roll in.
> >>
> >> There was some kind of problem with StopFilterFactory and the
> text_general
> >> field type. Thanks to Stack Overflow I was able to determine that the
> >> apparent problem was that there was a parameter, previously fine, which
> was
> >> no longer fine. So I removed all instances of enablePositionIncrements="
> true".
> >> That helped, but then I ran into a broader error: "Plugin Initializing
> >> failure for [schema.xml] fieldType". It didn’t say which field type.
> Buried
> >> in the logs I found a reference in the Java stack trace—which
> *disappears*
> >> (and distorts the viewing window horribly) after a few seconds when you
> try
> >> to view it in the web log UI—to the string "units="degrees"". Sure
> enough,
> >> this string appeared in my schema.xml for a class called "solr.
> >> SpatialRecursivePrefixTreeFieldType" that I’m pretty sure I never use.
> I
> >> removed that parameter, and moved on to the next set of errors.
> >>
> >> Apparently there is some aspect of the Thai text field type that Solr
> >> 6.2.0 doesn’t like. So I disabled it. I don’t use Thai text.
> >>
> >> Now Solr was complaining about "Error loading class
> >> 'solr.admin.AdminHandlers'". So I found the reference to
> >> solr.admin.AdminHandlers in solrconfig.xml for each of my cores and
> >> commented it out. Only then did Solr work again.
> >>
> >> This was not a smooth process. It took about two hours. The user
> interface
> >> is still as buggy as an early alpha of most products, the error