Hi all,
I am currently working on upgrading Solr 1.4.1 to the latest stable release.
What is the latest stable release I can use?
Are there specific things I need to look at when upgrading?
Need help
Thanks
Danesh
September 2014, Apache Solr™ 4.9.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.9.1
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted
As mentioned in another post we (already) have a (Lucene-based) generic
indexing framework which allows any source/entity to provide
indexable/searchable data.
Sources may be:
pages
events
products
customers
...
As their names imply, they have nothing in common ;) Nevertheless we'd like to
sear
4.10.1 out shortly is a good bet.
No idea about the upgrade specifically, but I would probably do some
reading of recent solrconfig.xml to get a hint of new features. Also,
schema.xml has a version number at the top, and the defaults controlled
by that version number have changed. So, it is somethi
On 22 September 2014 01:04, Clemens Wyss DEV wrote:
> All I have at hand is "Solr in Action" which doesn't (didn't) mention the
> copyField-wildcards...
Well, unless your implementation is also fully theoretical, you also
have all the various examples in the Solr distribution. They
demonstrate m
This confuses me a bit, aren't we already at 4.10.0?
But CHANGES.txt of 4.10.0 doesn't know anything about 4.9.1.
Is this an interim version or something about backward compatibility?
Regards
Am 22.09.2014 um 11:36 schrieb Michael McCandless:
> September 2014, Apache Solr™ 4.9.1 available
>
>
This is a bug fix release on top of 4.9. Only some important fixes from
4.10 and beyond were back-ported to the 4.9 branch. There may be a 4.10.1
release too very soon.
On Mon, Sep 22, 2014 at 5:54 PM, Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
> This confuses me a bit, aren't we alre
Hello,
I'm trying to find the best way to "fake" the terms component for fuzzy
queries. That is, I need the full set of index terms for each of the
two queries "quidam~1" and "quidam~2".
I tried defining two suggesters with FuzzyLookupFactory, with
maxEdits=1 and 2 respectively, but the results
On 9/22/2014 6:24 AM, Bernd Fehling wrote:
> This confuses me a bit, aren't we already at 4.10.0?
>
> But CHANGES.txt of 4.10.0 doesn't know anything about 4.9.1.
>
> Is this an interim version or something about backward compatibility?
It's a bugfix release, fixing some showstopper bugs in a re
I'll merge back the 4.9.1 CHANGES entries so when we do a 4.10.1,
they'll be there ... and I'll also make sure any fix we backported for
4.9.1, we also backport for 4.10.1.
Mike McCandless
http://blog.mikemccandless.com
On Mon, Sep 22, 2014 at 9:11 AM, Shawn Heisey wrote:
> On 9/22/2014 6:24 A
Hello Erick.
Below is the information you requested. Thanks for your help!
On Fri, Sep 19, 2014 at 7:36 PM, Erick Erickson [via Lucene] <
ml-node+s472066n4160122...@n3.nabble.com> wrote:
> Hmmm, I'd have to see the schema definition for your description
> field. For this, the a
Hi All,
We at Lucidworks are pleased to announce the release of Lucidworks Fusion 1.0.
Fusion is built to overlay on top of Solr (in fact, you can manage multiple
Solr clusters -- think QA, staging and production -- all from our Admin). In
other words, if you already have Solr, simply poin
Nathaniel,
Can you show us all of the parameters you are sending to the spellchecker?
When you specify "alternativeTermCount" with "spellcheck.q=quidam", what are
the terms you expect to get back? Also, are you getting any query results
back? If you are using a "q" that returns results, or m
Hi James,
The request
/spellcheck?spellcheck=true&spellcheck.q=quiam&spellcheck.dictionary=fuzzy2
returns
quidam, quam, quia, quoniam, quidem, quadam, quodam, quoad, quedam,
quis, quae, quas, quem, quid, quin, qui, qua
Replacing quiam (not in the index) by quidam (in the index) returns
no
Did you try "spellcheck.alternativeTermCount" with DirectSolrSpellChecker? You
can set it to whatever low value you actually want it to return back to you
(perhaps 20 suggestions max?).
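As a sketch, the setting could sit on the spellchecker definition in solrconfig.xml and be paired with request parameters like these (the spellchecker name "fuzzy2" is taken from this thread; the field name is hypothetical):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">fuzzy2</str>
    <str name="field">text</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- allow up to 2 edits between the query term and a suggestion -->
    <int name="maxEdits">2</int>
  </lst>
</searchComponent>
```

with a request along the lines of
spellcheck=true&spellcheck.q=quidam&spellcheck.dictionary=fuzzy2&spellcheck.alternativeTermCount=20&spellcheck.count=20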
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: Nathaniel Rudavsky-Brody [m
Yep, I tried it both as a default param in the request handler (as in
the config I sent), and in the request, but with no effect... That's
what surprised me, since it seems it should work.
On Mon, Sep 22, 2014 at 4:38, Dyer, James
wrote:
Did you try "spellcheck.alternativeTermCount" with
Di
Hi,
First thanks for your advices.
I did several tests and finally I could index all the data on my
SolrCloud cluster.
The error was client side, it's documented in this post :
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201406.mbox/%3ccfc09ae1.94f8%25rebecca.t...@ucsf.edu%3E
"
Hello,
I have a Solr index (12 M docs, 45 GB) with facets, and I'm trying to
improve facet queries performances.
1/ I tried to use docValues on facet fields, it didn't work well
2/ I tried facet.threads=-1 in my query, and it worked perfectly (from more
than 15s down to 2s for the longest queries)
3/ I'm tryi
On 9/22/2014 12:05 AM, William Bell wrote:
> Is there an easy way to get max() across documents?
I think the stats component is probably what you want. That component
seems to be enabled by default.
http://wiki.apache.org/solr/StatsComponent
Thanks,
Shawn
You should be able to use f.<fieldname>.facet.method=enum
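For example, the per-field override goes straight into the request parameters (the field name "subject" is hypothetical):

```
facet=true&facet.field=subject&f.subject.facet.method=enum
```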
Alan Woodward
www.flax.co.uk
On 22 Sep 2014, at 16:21, jerome.dup...@bnf.fr wrote:
> Hello,
>
> I have a solr index (12 M docs, 45Go) with facets, and I'm trying to
> improve facet queries performances.
> 1/ I tried to use docvalue on facet fields, it
DirectSpellChecker defaults to not suggest anything for terms that occur in 1%
or more of the total documents in the index. You can set this higher in
solrconfig.xml either with a fractional percent or a whole-number absolute
number of documents.
See
http://lucene.apache.org/core/4_10_0/sugge
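A minimal sketch of where that setting lives in solrconfig.xml (spellchecker name and field are hypothetical); values below 1 are read as a fraction of documents, values of 1 or more as an absolute document count:

```xml
<lst name="spellchecker">
  <str name="name">default</str>
  <str name="field">text</str>
  <str name="classname">solr.DirectSolrSpellChecker</str>
  <!-- only skip suggestions for terms in 99.9%+ of documents -->
  <float name="maxQueryFrequency">0.999</float>
</lst>
```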
I'll take a look at that. Thanks
-Original Message-
From: Apoorva Gaurav [mailto:apoorva.gau...@myntra.com]
Sent: Sunday, September 21, 2014 11:32 PM
To: solr-user
Subject: Re: Help on custom sort
Try using a custom value source parser and pass the "formula" of computing the
price to s
Thank you, that works!
I'd already tried several values for maxQueryFrequency, but apparently
without properly understanding it. I was confused by the line "A lower
threshold is better for small indexes" when in fact I need a high value
like 0.99, so every term returns suggestions. (Is it poss
Hi,
I'm trying to set up a multicore SolrCloud on HDFS. I am getting the
following error for all my cores when trying to start the server:
ERROR org.apache.solr.core.CoreContainer – Unable to create core:
org.apache.solr.common.SolrException: Schema Parsing Failed: unknown field
'id'. Schema fi
You cannot use 100% because, as you say, 1 is interpreted as "1 document". But
you can do something like 99.9% .
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: Nathaniel Rudavsky-Brody [mailto:nathaniel.rudav...@gmail.com]
Sent: Monday, September 22, 2014 1
jerome.dup...@bnf.fr [jerome.dup...@bnf.fr] wrote:
> I have a solr index (12 M docs, 45Go) with facets, and I'm trying to
> improve facet queries performances.
> 1/ I tried to use docvalue on facet fields, it didn't work well
That was surprising, as the normal result of switching to DocValues is
I use Solr to index some products that have an ImageUrl field. Obviously some
of the images are duplicates. I would like to boost the rankings of products
that have unique images (i.e. more specifically, unique ImageUrl field
values, because I don't deal with the image binary).
By this I mean, if
: There is nothing wrong with the declaration of the 'id' field, and I have it
: working fine when it's not using SolrCloud/HDFS. One odd thing is the part
...I can't explain that, but as far as this...
: that says "Schema file is solr//schema.xml", because there is no
: schema file there. I hav
Thanks for that link. From what I read the performance difference is
negligible, especially if I would just be replacing one static field with a
dynamic one.
Erick Erickson wrote
> Sep 14, 2014; 12:06pm Re: Solr Dynamic Field Performance
>
>
> Dynamic fields, once they are actually _in_ a docu
Thanks. There is definitely a <uniqueKey> in each of the schemas.
I am using 4.7.2.
Here is one of the *schema.xml* (the others are similar):
id
: Thanks. There is definitely a <uniqueKey> in each of the schemas.
:
: I am using 4.7.2.
if this config is working for you when you don't use zookeeper/hdfs then
you must be using a newer version of Solr when you test w/ zk/hdfs
4.8.0 is when the <fields> and <types> section tags were deprecated.
in 4.7.x and earlier
NP, glad to help someone dig into the Solr code.
We'll wait for the patch ;)...
Erick
On Sun, Sep 21, 2014 at 8:52 PM, Anurag Sharma wrote:
> Hey Eric,
>
> It works like charm :).
> Thanks a lot for pinpointing the issue. My bad, I was using the suspend=y
> option blindly.
>
> Thanks again,
> A
Depending on the size, I'd go for (a). IOW, I wouldn't change the
sharding to use (a), but if you have the same shard setup in that
case, it's easier.
You'd index a type field with each doc indicating the source of your
document. Then use the grouping feature to return the top N from each
of the
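The grouped request could be sketched with parameters like these (the field name doc_type is an assumption; group.limit controls the N returned per source):

```
q=*:*&group=true&group.field=doc_type&group.limit=5
```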
Hi solr experts,
I am building out a solr cluster with this configuration
3 external zookeepers
15 solr instances (nodes)
3 shards
I need to start out with 3 nodes and remaining 12 nodes would be added to
cluster. I am able to create a collection with 3 shards. This process works
fine using colle
Probably go for 4.9.1. There'll be a 4.10.1 out in the not-too-distant
future that you can upgrade to if you wish. 4.9.1 -> 4.10.1 should be
quite painless.
But do _not_ copy your schema.xml and solrconfig.xml files over from
1.4 to 4.x. There are some fairly easy ways to shoot yourself in the
foot
You have your index and query time analysis chains defined much
differently. Omitting the WordDelimiterFilterFactory from the
query-time analysis chain will lead to endless problems.
With the definition you have, here are the terms in the index and
their term positions as below. This is available
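As a sketch of the symmetric setup (fieldType name and filter attributes are illustrative), declaring a single analyzer makes Solr use the same chain at index and query time, avoiding the mismatch:

```xml
<fieldType name="text_wdf" class="solr.TextField" positionIncrementGap="100">
  <!-- one <analyzer> with no type attribute applies to both index and query -->
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1" catenateWords="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```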
Hello,
I am a non-techie who decided to download and install Solr 5.0 to parse data
for my community activism. Got it installed and running, updated the example
schema and installation with a bunch of CSV data. And went back to deal with
the first of two fields I deferred till later - dates an
This should be happening automatically by the tf/idf
calculations, which weighs terms that are rare in the
index more heavily than ones that are more common.
That said, at very low numbers this may be invisible;
I'm not sure the relevance calculations for 3 as opposed
to 1 are very consequential.
The example schema and solrconfig are intended to
show you a large number of possibilities, they are
not necessarily intended to be "best practices". I would
argue that if you do _not_ want to have dynamic fields
defined, you should take them all out. And you should
take all of the other field defi
One other possibility in addition to Hoss' comments.
Did you load a version of your configs to ZooKeeper
sometime that didn't have these fields? I don't quite
know where the schema and solrconfig files came
from, but the fact that they're on a local disk says
nothing about what's in ZooKeeper. When
That page is talking about leaders/followers coming up
and going down, but pretty much after they've been
assigned in the first place. Your problem is just the
"assigned in the first place" bit.
Since Solr 4.8, there's the addreplica collections API
command that is what you want I think, see:
http
I think this'll help:
http://wiki.apache.org/solr/ScriptUpdateProcessor
Essentially, each time a document comes in to Solr,
this will get invoked on it. You'll have to do some
fiddling to get it right, you have to remove the field from
the doc and transform it then put it back. None of this
is ha
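For reference, a minimal solrconfig.xml sketch wiring in such a script (the chain name and script file name are hypothetical; the chain is selected per request with update.chain=script):

```xml
<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <!-- script file placed in the core's conf/ directory -->
    <str name="script">update-script.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```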
Hello Erick,
Thank you so much for your help. That makes perfect sense. I will do the
changes you suggest and let you know how it goes.
Thanks!
On Mon, Sep 22, 2014 at 4:12 PM, Erick Erickson [via Lucene] <
ml-node+s472066n4160547...@n3.nabble.com> wrote:
> You have your index and query time
Thanks Erick,
I expected to hear the dreaded word "programming" at some point and I guess
that point has arrived. Now that I know where and what to tinker with.
And I should have said 4.10 below, not 5.0.
On Sep 22, 2014, at 4:44 PM, Erick Erickson wrote:
> I think this'll help:
>
> htt
Thanks for the suggestions. I actually had both problems. I couldn't figure
out how to remove the configs from zookeeper through the cloud scripts, so I
just manually removed the files in the zookeeper data directory.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Schema-Pa
You could try - for your ideal scenario - creating an
UpdateRequestProcessor (URP) chain that includes
ParseDateFieldUpdateProcessorFactory:
https://lucene.apache.org/solr/4_10_0/solr-core/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.html
Notice that it has been designed f
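A minimal sketch of such a chain in solrconfig.xml (the chain name and date formats are assumptions; list every input format your CSV data actually uses):

```xml
<updateRequestProcessorChain name="parse-dates">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <str>MM/dd/yyyy</str>
      <str>yyyy-MM-dd</str>
    </arr>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```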
: out how to remove the configs from zookeeper through the cloud scripts, so I
: just manually removed the files in the zookeeper data directory.
https://cwiki.apache.org/confluence/display/solr/Using+ZooKeeper+to+Manage+Configuration+Files
https://cwiki.apache.org/confluence/display/solr/Command
Hi there,
I'm using Solr 4.7 and find the fast vector highlighter is not as fast as
it used to be in Solr 3.x. It seems the results are not cached, even after
several hits of the same query, it still takes dozens of milliseconds to
return. Any idea or solution is appreciated. Thanks.
Alexandre:
Honest, I looked for that but was in a rush and couldn't find it and
thought I was remembering something _else_.
That's definitely a better approach, thanks! Perhaps this time I'll
remember
Erick
On Mon, Sep 22, 2014 at 3:23 PM, Alexandre Rafalovitch
wrote:
> You could try - fo
Thanks Alex and Erick for quick response,
This is really helpful.
On Tue, Sep 23, 2014 at 1:19 AM, Erick Erickson
wrote:
> Probably go for 4.9.1. There'll be a 4.10.1 out in the not-too-distant
> future that you can upgrade to if you wish. 4.9.1 -> 4.10.1 should be
> quite painless.
>
> But do _
Hi Grant.
Will there be a Fusion demonstration/presentation at Lucene/Solr Revolution
DC? (Not listed in the program yet).
Thomas Egense
On Mon, Sep 22, 2014 at 3:45 PM, Grant Ingersoll
wrote:
> Hi All,
>
> We at Lucidworks are pleased to announce the release of Lucidworks Fusion
> 1.0. Fusi