My prior use of SOLR in production was pre SOLR cloud. We put a
round-robin load balancer in front of replicas for searching.
Do I understand correctly that a load balancer is unnecessary with SOLR
Cloud? I.e., SOLR and Zookeeper will balance the load, regardless of
which replica's URL is ge
be distributed with a load balancer.
Queries do NOT go through Zookeeper.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr 17, 2016, at 9:35 PM, John Bickerstaff
wrote:
>
> My prior use of SOLR in production was pre SOLR cloud. We put a
>
search requests may not run on the Solr instance the load
balancer targeted - due to "a" above.
Corrections or refinements welcomed...
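For what it's worth, the client-side balancing that CloudSolrClient does internally can be sketched very roughly in Python. This is only an illustration of round-robin selection over replicas discovered from ZooKeeper; the replica URLs below are made up:

```python
from itertools import cycle

# Hypothetical replica URLs -- substitute your cluster's live nodes,
# which CloudSolrClient would discover via ZooKeeper.
replicas = [
    "http://solr1:8983/solr/collection1",
    "http://solr2:8983/solr/collection1",
    "http://solr3:8983/solr/collection1",
]

# Naive round-robin chooser: each call hands back the next replica.
chooser = cycle(replicas)

def next_replica():
    return next(chooser)
```

The real client also tracks which nodes are alive and retries on failure; this sketch only shows the distribution idea.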
On Mon, Apr 18, 2016 at 7:21 AM, Shawn Heisey wrote:
> On 4/17/2016 10:35 PM, John Bickerstaff wrote:
> > My prior use of SOLR in production w
Excellent - thanks!
On Mon, Apr 18, 2016 at 9:16 AM, Erick Erickson
wrote:
> Your summary pretty much nails it.
>
> For (b) note that CloudSolrClient uses an internal software load
> balancer to distribute queries, FWIW.
>
>
>
> On Mon, Apr 18, 2016 at 7:52 AM, J
wrote:
> On Mon, Apr 18, 2016 at 3:52 PM, John Bickerstaff
> wrote:
> > Thanks all - very helpful.
> >
> > @Shawn - your reply implies that even if I'm hitting the URL for a single
> > endpoint via HTTP - the "balancing" will still occur across the S
p a random
> collection name that doesn't conflict, and create the thing, and smoke test
> with it. I know that standard practice is to bring up all new nodes, but
> I don't see why this is needed.
>
> -Original Message-
> From: John Bickerstaff [mail
ent changes some under significant indexing
> loads. The argument totally changes if you need low latency. It
> doesn't sound like your situation is sensitive to any of these
> though
>
> Best,
> Erick
>
> On Apr 18, 2016 10:41 AM, "John Bickerstaff"
> wrote:
ly avoids loading
production databases.
If you're interested, ping me -- I'm happy to share what I've got...
On Tue, Apr 19, 2016 at 2:08 AM, Charlie Hull wrote:
> On 18/04/2016 18:22, John Bickerstaff wrote:
>
>> So - my IT guy makes the case that we don't really n
, Apr 19, 2016 at 7:59 AM, Shawn Heisey wrote:
> On 4/18/2016 11:22 AM, John Bickerstaff wrote:
> > So - my IT guy makes the case that we don't really need Zookeeper / Solr
> > Cloud...
>
> > I'm biased in terms of using the most recent functionality,
I guess errors like "fsync-ing the write ahead log in SyncThread:5 took
7268ms which will adversely effect operation latency."
and: "likely client has closed socket"
make me wonder if something went wrong in terms of running out of disk
space for logs (thus giving your OS no space for necessary f
Which field do you try to atomically update? A or B or some other?
On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee"
wrote:
> Hi,
> Here is the scenario for SOLR5.5:
>
> FieldA type= stored=true indexed=true
>
> FieldB type= stored=false indexed=true docValue=true
> usedocvalueasstored=false
>
>
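For reference, an atomic update is sent as a JSON body in which only the id and the changed fields appear, and each changed field maps a modifier ("set", "inc", "add", ...) to its value. A minimal sketch (the document id and values here are invented):

```python
import json

# Build an atomic-update request body: only "id" plus the fields
# being modified; each modified field maps a modifier to its value.
def atomic_update(doc_id, field, value, modifier="set"):
    return json.dumps([{"id": doc_id, field: {modifier: value}}])

body = atomic_update("doc-1", "FieldB", 42)
# POST this to /solr/<collection>/update with
# Content-Type: application/json
```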
My default schema.xml does not have an entry for solr.StringField so I
can't tell you what that one does.
If you look for solr.StrField in the schema.xml file, you'll get some idea
of how it's defined. The default setting is for it not to be analyzed.
On Tue, May 3, 2016 at 10:16 AM, Steven Whit
ed fq=author:"Schild, Herbert" parameter.
On Tue, May 3, 2016 at 2:01 PM, Steven White wrote:
> Thanks John.
>
> Yes, the out-of-the-box schema.xml does not have solr.StringField.
> However, a number of Solr pages on the web mention solr.StringField [1] and
> thus I
You'll note that the "name" of the field in schema.xml is "string" and the
class is solr.StrField.
Easy to get confused when you're writing something up quickly... in a sense
the "string" field IS a solr.StrField
... but I could be wrong of course.
I think you should be able to change $SOLR_HOME to any valid path.
For example: /var/logs/solr_logs
On Tue, May 3, 2016 at 4:02 PM, Yunee Lee wrote:
> Hi, solr experts.
>
> I have a question for installing solr server.
> Using ' install_solr_service.sh' with option -d , the solr home directory
Hoss - I'm guessing this is all in the install script that gets created
when you run that command (can't remember it) on the tar.gz file...
In other words, Yunee can edit that file, find those variables (like
SOLR_SERVICE) and change them from what they're set to by default to
whatever he wants...
Max doc is the total number of documents in the collection, INCLUDING the
ones that have been deleted but not actually removed. Don't worry, deleted
docs are not used in search results.
Yes, you can reduce the number by "optimizing" (see the button), but this
does take time and bandwidth, so use it sparingly.
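The relationship between the two counts is simple arithmetic over the core stats (the numbers below are made up):

```python
# maxDoc counts live docs plus deleted docs not yet merged away;
# numDocs counts only live (searchable) docs.
def deleted_docs(max_doc, num_docs):
    return max_doc - num_docs

stats = {"maxDoc": 1_250_000, "numDocs": 1_100_000}
# 150,000 deleted docs still occupying space until a merge/optimize
print(deleted_docs(stats["maxDoc"], stats["numDocs"]))
```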
I'll just briefly add some thoughts...
#1 This can be done several ways - including keeping a totally separate
document that contains ONLY the data you're willing to expose for free --
but what you want to accomplish is not clear enough to me for me to start
making recommendations. I'll just say
- which is generally incompatible with a really
great user experience...
And, of course, I may have totally missed your meaning and you may have had
something totally different in mind...
On Thu, May 5, 2016 at 8:33 AM, John Bickerstaff
wrote:
> I'll just briefly add some thoughts...
>
d
new ones, nothing fancy. All fields are stored="true" and there's no
. I've tried versions 5.2.1 & 5.3.1 in standalone mode, with
the same outcome. It looks like a bug to me but I might have overlooked
something? This is my first attempt at atomic updates.
Thanks,
John.
_Sebago_ sebago shoes11.8701925463775
It's sent as the body of a POST request to
http://127.0.0.1:8080/solr/ato_test/update?wt=json&commit=true, with a
Content-Type: text/xml header. I still noted the consistent loss of
another document with the update above.
John
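An XML update message like the one being POSTed above can be assembled programmatically; a sketch, with placeholder field names and values:

```python
import xml.etree.ElementTree as ET

# Build an <add><doc>...</doc></add> update body for
# /solr/<collection>/update with Content-Type: text/xml.
def xml_update(fields):
    add = ET.Element("add")
    doc = ET.SubElement(add, "doc")
    for name, value in fields.items():
        f = ET.SubElement(doc, "field", name=name)
        f.text = str(value)
    return ET.tostring(add, encoding="unicode")

body = xml_update({"id": "doc-1", "title": "Sebago shoes"})
```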
On 08/10/15 00:38, Up
ll; Closing out SolrRequest:
{wt=json&commit=true&update.chain=dedupe}
The update.chain parameter wasn't part of the original request, and
"dedupe" looks suspicious to me. Perhaps should I investigate further there?
Thanks,
John.
On 08/10/15 08:25, John Smith wrote:
>
Yes indeed, the update chain had been activated... I commented it out
again and the problem vanished.
Good job, thanks Erick and Upayavira!
John
On 08/10/15 08:58, Upayavira wrote:
> Look for the DedupUpdateProcessor in an update chain.
>
> that is there, but commented out II
logging) in
the data import handler. Is there an easy way to do this? Conceptually,
shouldn't the update chain be callable from the data import process -
maybe it is?
John
On 08/10/15 09:43, Upayavira wrote:
> Yay!
>
> On Thu, Oct 8, 2015, at 08:38 AM, John Smith wrote:
>> Yes ind
language. That's
probably what I'm gonna do anyway.
Thanks for your help!
John
On 08/10/15 13:39, Upayavira wrote:
> You can either specify the update chain via an update.chain request
> parameter, or you can configure a new request parameter with its own URL
> and separate
Hi Allessandro,
In the example I set the value to 1, but it's actually incremented in
the code, so with time it should go up. You're right though, I could use
an inc update instead.
John
On 08/10/15 16:45, Alessandro Benedetti wrote:
> Not related to the deletion problem, only a
The speed of particular query has gone from about 42 msec to 66 msec
without any changes.
How do I go about troubleshooting what may have happened? And how do I
improve that speed?
Thanks
There is a .NET app that is calling solr. I am measuring time span using
.NET provided methods. It used to take about 42 msec and it started taking
66 msec from the time to compose the call and query solr, get results and
parse them back. Interestingly today it was close to 44 msec.
I am testing us
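When chasing latency shifts like this, it helps to time the Solr round trip separately from client-side composition and parsing, so you know which piece moved. A rough timing sketch (the timed workload here is just a stand-in):

```python
import time

def timed(fn, *args, **kwargs):
    """Return (result, elapsed_ms) for one call -- useful for isolating
    the Solr round trip from request composition and response parsing."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Example with a stand-in workload instead of an HTTP call:
_, ms = timed(sum, range(1000))
```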
Tomcat or Jetty is not supported any more.
Regards,
John Jenniskens
(fairly new to Solr)
n\solr.bat start from the task scheduler.
Is this the preferred method on Windows?
Regards,
John Jenniskens
(fairly new to Solr)
Option #2 is far better.
I found this: https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security
but this solution requires that I use Manifold CF which I cannot. Does anyone
know how Manifold does it and can it be adopted to Solr?
Another idea I'm wondering about is what if I create
All,
With a cloud setup for a collection in 4.6.1, what is the most elegant way
to backup and restore an index?
We are specifically looking into the application of when doing a full
reindex, with the idea of building an index on one set of servers, backing
up the index, and then restoring that ba
ter, but the results I mention above are making me think that the first
scenario is actually the case.
Based on what I hear about the above, a follow up question may be what in
the world is wrong with my analyzer :)
Thanks for any thoughts!
Best,
John
't match
strawberry or was a different amount (.75pt, for instance). I know I'm no
expert, but I was thinking my analyzer was a bit better than that :p
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville,
d of "cannulated" screws we see "cortical." I'm
convinced Solr is trolling me at this point :p
On Mon, May 18, 2015 at 2:34 PM, Doug
"cannula" instead of "cannulated" #facepalm
- i'll be GLAD to use that! i'd been trying to use http://explain.solr.pl/
previously but it kept error'ing out on me :\
thanks again, will report back!
d terms created
lower relevancy due to IDF on the *joint *terms/token?
On Mon, May 18, 2015 at 4:57 PM, John Blythe wrote:
> Doug,
>
> A couple th
Awesome, following it now!
On Mon, May 18, 2015 at 8:21 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Glad you figured t
I think the omitNorms option controls whether your field length is
normalized. Try setting it to false (it defaults to true for floats) and
see if it helps
On Tue, Ma
Hi all,
I've been fine tuning our current Solr implementation the last week or two
to get more precise results. We are trying to get our implementation
accurate enough to serve as a lightweight machine learning (obviously a
misnomer) implementation of sorts. Actual user generated searching is far
could i do that the same way as my mention of using bq? the docs aren't
very rich in their example or explanation of boost= here:
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
thanks!
cool, will check into it some more via testing
On Wed, May 20, 2015 at 3:22 PM, Walter Underwood
wrote:
> I believe that boost is a superset of
new question re edismax: when i turn it on (in solr admin) my score goes
way down, from 772 to 4.9.
what in the edismax query parser would account for that huge nosedive?
soon? anyway, it ended up being a result of my query still being
in the primary query box instead of moving it to the q.alt box. i'd thought
the "alt" was indicative of it being an *alternate* query strictly
speaking. changed it to house the query and voila!
thanks-
Good call thank you
On Wed, May 20, 2015 at 5:15 PM, Erick Erickson
wrote:
> John:
> The spam filter is very aggressive. Try changing the type to "plain
> text" rather than rich text or html...
> Best,
> Erick
> On Wed, May 20, 2015 at 2:35 PM, John Blythe wro
ross where people had similar issues resolved by doing so, but
it didn't help any.
i'm not getting any errors, what puzzle piece am i missing in my
configuration or query building?
thanks!
- john
I'm actually using for
querying). I guess it only takes non-copy (and maybe non-dynamic?) fields
into account?
Thanks for any more information on that field specific approach/issue!
Just checked my schema.xml and think that the issue is resulting from the
"stored" property being set false on descript2 and true on descript.
hi all,
i'm attempting to suggest products across a range to users based on
dimensions. if there is a "5x10mm Drill Set" for instance and a competitor
sells something similar enough then i'd like to have it shown. the range,
however, would need to be dynamic. i'm thinking for our initial testing
p
thanks erick. will give it a whirl later today and report back tonight or
tomorrow. i imagine i'll have some more questions crop up :)
best,
On We
this site has been a great help to me in seeing how things shake out as far
as the scores are concerned: http://splainer.io/
On Thu, May 28, 2015 at 10:0
thematical functions
themselves
I get this error:
error": { "msg": "Error parsing fieldname", "code": 400 }
thanks for any assistance or insight
there something else i'm missing in the way i'm
constructing this?
thanks for helping me stumble through this!
On Thu, May 28, 2015 at 12:37
like it
(via morelikethis i assume?).
in either case, point 4 stands and i probably got carried away in the
learning process w/o stepping back to think about real life implementation
and workarounds.
thanks!
ut the results, just
getting a successful query to run. upon getting to that point i would then
tailor the lower and upper bounds accordingly to begin testing more true to
life queries.
at any rate, the #4 point seems to be the path to take for the present.
thanks for the discussion!
morning everyone,
i'm attempting to find related documents based on a manufacturer's
competitor. as such i'm querying against the 'description' field with
manufacturer1's product description but running a filter query with
manufacturer2's name against the 'mfgname' field.
one of the ways that we
after further investigation it looks like the synonym i was testing against
was only associated with one of their multiple divisions (despite being the
most common name for them!). it looks like this may clear the issue up, but
thanks anyway!
Thanks Erick!
On Mon, Jun 1, 2015 at 11:29 AM, Erick Erickson
wrote:
> For future reference, fq clauses are parsed just like the q clause;
> they can be arbitrarily complex.
> Best,
> Erick
> On Mon, Jun 1, 2015 at 5:52 AM, John Blythe wrote:
>> after further investigat
I may be answering the wrong question - but SolrCloud goes in by default on
8983, yes? Is yours currently on 8080?
I don't recall where, but I think I saw a config file setting for the port
number (In Solr I mean)
Am I on the right track or are you asking something other than how to get
Solr on
g4j.properties"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8983"
On Wed, May 11, 2016 at 11:59 AM, John Bickerstaff wrote:
> I may be answering the wrong question - but SolrCloud goes in by default
> on 8983, yes? Is yours currently on 8080?
>
> I don't re
licas. So I’m looking to see if anyone knows what is the
> cleanest way to move from a Tomcat/8080 install to a Jetty/8983 one.
>
> Thanks
>
> > On May 11, 2016, at 1:59 PM, John Bickerstaff
> wrote:
> >
> > I may be answering the wrong question - but SolrCloud go
move from a Tomcat/8080 install to a Jetty/8983 one.
>
> Thanks
>
>> On May 11, 2016, at 1:59 PM, John Bickerstaff
wrote:
>>
>> I may be answering the wrong question - but SolrCloud goes in by default
on
>> 8983, yes? Is yours currently on 8080?
>>
>> I
I'm not a dev, but I would assume the following if I were concerned with
speed and atomicity
A. A commit WILL be reflected in all appropriate shards / replicas in a
very short time.
I believe Solr Cloud guarantees this, although the time frame
will be dependent on "B"
B. Network, proces
In case it's helpful for a quick and dirty peek at your facets, the
following URL (in a browser or Curl) will get you basic facets for a field
named "category" -- assuming you change the IP address / hostname to match
yours.
http://XXX.XXX.XX.XX:8983/solr/statdx_shard1_replica3/select
q=*%3A*&rows=
I should clarify:
http://XXX.XXX.XX.XX:8983/solr/yourCoreName/select
q=*%3A*&rows=0&wt=json&indent=true&facet=true&facet.field=category
"yourCoreName" will get built in for you if you use the Solr Admin UI for
queries --
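The same request can be assembled programmatically rather than hand-escaped; a sketch where the host, core name, and facet field are placeholders:

```python
from urllib.parse import urlencode

# Assemble a simple field-facet query like the URL above;
# urlencode handles the %-escaping (e.g. *:* -> %2A%3A%2A).
params = {
    "q": "*:*",
    "rows": 0,
    "wt": "json",
    "indent": "true",
    "facet": "true",
    "facet.field": "category",
}
url = "http://localhost:8983/solr/yourCoreName/select?" + urlencode(params)
```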
On Fri, May 13, 2016 at 9:36 AM, John Bicke
I've been working on a less-complex thing along the same lines - taking all
the data from our corporate database and pumping it into Kafka for
long-term storage -- and the ability to "play back" all the Kafka messages
any time we need to re-index.
That simpler scenario has worked like a charm. I
f
not, how can I create a working collection with a single shard?
This is Solr-6.0.0 in cloud mode with zookeeper-3.4.8.
Thanks,
John
In your original command, you listed the same port twice. That may have
been at least part of the difficulty.
It's probably fine to just use one zk node - as the zookeeper instances
should be aware of each other.
I also assume that if your solr.in.sh (or windows equivalent) has the
properly form
it's roundabout, but this might work -- ask for the healthcheck status
(from the solr box) and hit each zkNode separately.
I'm on Linux so you'll have to translate to Windows... using the solr.cmd
file I assume...
./solr healthcheck -z 192.168.56.5:2181/solr5_4 -c collectionName
./solr healthche
I think those zk server warning messages are expected. Until you have 3
running instances you don't have a "Quorum" and the Zookeeper instances
complain. Once the third one comes up they are "happy" and don't complain
any more. You'd get similar messages if one of the Zookeeper nodes ever
went d
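The quorum arithmetic behind those warnings can be sketched as follows; this is just the standard strict-majority rule, not anything ZooKeeper-specific beyond that:

```python
# ZooKeeper needs a strict majority of the ensemble to serve requests.
def quorum(ensemble_size):
    return ensemble_size // 2 + 1

def tolerated_failures(ensemble_size):
    return ensemble_size - quorum(ensemble_size)

# A 3-node ensemble needs 2 nodes up and tolerates 1 failure, which is
# why the complaints stop once the third node joins.
print(quorum(3), tolerated_failures(3))
```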
Having run the optimize from the admin UI on one of my three cores in a
Solr Cloud collection, I find that when I got to try to run it on one of
the other cores, it is already "optimized"
I realize that's not the same thing as an API call, but thought it might
help.
On Tue, May 17, 2016 at 11:22
On 17/05/16 11:56, Tom Evans wrote:
> On Tue, May 17, 2016 at 9:40 AM, John Smith wrote:
>> I'm trying to create a collection starting with only one shard
>> (numShards=1) using a compositeID router. The purpose is to start small
>> and begin splitting shards when t
hi all,
i'm going mad over something that seems incredibly simple. in an attempt to
maintain some order to my growing data, i've begun to employ dynamicFields.
basic stuff here, just using *_s, *_d, etc. for my strings, doubles, and
other common datatypes.
i have these stored but not indexed. i'm
never mind, the issue ended up being that i had the copyField for that uom
field in two places and hadn't realized it, doh!
On Tue, May 24, 2016 a
hi all,
i've got layered entities in my solr import. it's calling on some
transactional data from a MySQL instance. there are two fields that are
used to then lookup other information from other tables via their related
UIDs, one of which has its own child entity w yet another select statement
to
Hi all,
I'm creating a Solr Cloud that will index and search medical text.
Multi-word synonyms are a pretty important factor.
I find that there are some challenges around multi-word synonyms and I also
found on the wiki that there is a recommended 3rd-party parser
(synonym_edismax parser) created
l stand of course... If anyone on the list has
experience in this area...
Thanks.
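To illustrate why multi-word synonyms are tricky: a naive query-time expansion has to rewrite the query into phrase alternatives before tokenization splits the words apart. A toy sketch (the synonym pair is invented, and this is not how the synonym_edismax parser works internally):

```python
# Toy multi-word synonym map (invented medical example).
SYNONYMS = {
    "heart attack": ["myocardial infarction"],
}

def expand(query):
    """Rewrite a query into an OR of quoted phrase alternatives."""
    alternatives = [query] + SYNONYMS.get(query.lower(), [])
    return " OR ".join(f'"{alt}"' for alt in alternatives)

q = expand("heart attack")
```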
On Thu, May 26, 2016 at 10:25 AM, John Bickerstaff wrote:
> Hi all,
>
> I'm creating a Solr Cloud that will index and search medical text.
> Multi-word synonyms are a pretty important factor.
>
> you might be interested in looking at:
> https://github.com/LucidWorks/auto-phrase-tokenfilter
>
>
> On 5/26/16, 9:29 AM, "John Bickerstaff" wrote:
>
> >Ahh - for question #3 I may have spoken too soon. This line from the
> >github repository readme suggests a
fixing typo:
http://wiki.apache.org/solr/QueryParser (search the page for
synonym_edismax)
On Thu, May 26, 2016 at 11:50 AM, John Bickerstaff wrote:
> Hey Jeff (or anyone interested in multi-word synonyms) here are some
> potentially interesting links...
>
> http://wiki.apa
oo gotcha. cool, will make sure to check it out and bounce any related
questions through here.
thanks!
best,
On Thu, May 26, 2016 at 1:45 PM, E
le the query time stuff might pose some issues, but probably
> not too bad, if there are any issues at all.
>
> I've had decent luck porting our various plugins from 4.10.x to 5.5.0
> because a lot of stuff is just Java, and it still works within the Jetty
> context.
>
at said, I generally prefer using SolrJ if DIH doesn't do the job
> after a day or two of fiddling, it gives more control.
>
> Good Luck!
> Erick
>
> On Thu, May 26, 2016 at 11:02 AM, John Blythe wrote:
> > oo gotcha. cool, will make sure to check it out and bounce any rela
ation I'll usually seek out a specialist to help
me make sure the query isn't wasteful. It frequently was and I learned a
lot.
On Thu, May 26, 2016 at 12:31 PM, John Bickerstaff wrote:
> It may or may not be helpful, but there's a similar class of problem that
> is fre
he related
data that the DIH is currently straining under due to the plethora of open
connections.
thanks for all the thoughts and sparks flying around on this one, guys!
best,
> problems with how queries are constructed from Lucene’s “sausagized” token
> stream.
>
> --
> Steve
> www.lucidworks.com
>
> > On May 26, 2016, at 2:21 PM, John Bickerstaff
> wrote:
> >
> > Thanks Chris --
> >
> > The two projects I'm aware of a
We had previously done something of the sort. With some sources of truth type
of cores we would do initial searches on customer transaction data before
fetching the related information from those "truth" tables. We would use the
various pertinent fields from results #1 to find related data in co
give it a whirl
On Mon, May 30, 2016 at 4:27 AM, Georg Sorst wrote:
> We've had good experiences with Solarium, so it's probably worth spen
configured.
If anyone out there has done this specific approach - could you validate
whether my thought process is correct and / or if I'm missing something?
Yes - I get that I can set it all up and try - but it's what I don't know I
don't know that bothers me...
On Fri, May 27
ColdFusion Developer*
>
> *CF Webtools*
> You Dream It... We Build It. <https://www.cfwebtools.com/>
> 11204 Davenport Suite 100
> Omaha, Nebraska 68154
> O: 402.408.3733 x128
> E: maryjo.smin...@cfwebtools.com
> Skype: maryjos.cfwebtools
>
>
> On Mon, May 30, 2016
This may be no help at all, but my first thought is to wonder if anything
else is already running on port 80?
That might explain the somewhat silent "fail"...
Nicely said by the way - resisting the urge
On Tue, May 31, 2016 at 2:02 PM, Teague James
wrote:
> Hello,
>
> I am trying to install S
or what else to do since he really doesn't know Solr well
> either.
>
> Mary Jo
>
>
>
>
> On Mon, May 30, 2016 at 7:49 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > Thanks for the comment Mary Jo...
> >
> > The erro
I'll try to update it soon. I've run
> the plugin on Solr 5 and 6, solrcloud and standalone. For running in
> SolrCloud make sure you follow
>
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
> On May 31, 2016 5:13 PM, "John Bick
assloader, I believe you can use whatever
> dir you want, with the appropriate bit of solrconfig.xml to load it.
> Something like:
>
>
>
> On 5/31/16, 2:13 PM, "John Bickerstaff" wrote:
>
> >All --
> >
> >I'm now attempting to use the hon_luc
e bit about the valuesourceyparser is a bit
confusing)
Thanks
On Tue, May 31, 2016 at 5:02 PM, John Bickerstaff
wrote:
> Thanks Jeff,
>
> I believe I tried that, and it still refused to load.. But I'd sure love
> it to work since the other process is a bit convoluted - altho
Yes - I'm being lazy, I know.
Thanks all!
On Tue, May 31, 2016 at 11:35 PM, Shawn Heisey wrote:
> On 5/31/2016 3:13 PM, John Bickerstaff wrote:
> > The suggestion on the readme is that I can drop the
> > hon_lucene_synonyms jar file into the $SOLR_HOME directory, but thi
ismaxQParserPlugin
>
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>
> at
> java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:814)
>
> at j
Ahhh - gotcha.
Well, not sure why it's not picked up - seems lots of other jars are...
Maybe Joe will comment...
On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey wrote:
> That refers to running Solr in cloud mode. We aren't there yet.
>
> MJ
>
>
>
> On Wed
he start.jar in /opt/solr/server as long as I
issue the "cloud mode" flag or does that no longer work in 5.x?
Do I instead have to modify that start script in /etc/init.d ?
On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff
wrote:
> Ahhh - gotcha.
>
> Well, not sure why it
, it
> really feels like a general solr config issue, but you could try some other
> foreign jar and see if that works.
> Here’s one I use: https://github.com/whitepages/SOLR-4449 (although this
> one is also why I use WEB-INF/lib, because it overrides a protected method,
> so it mig
Jun 1, 2016 at 12:42 PM, John Bickerstaff
wrote:
> So - the instructions on using the Blob Store API say to use the
> Denable.runtime.lib=true option when starting Solr.
>
> Thing is, I've installed per the "for production" instructions which gives
> me an entry