No, there is no such API. It would be a good improvement, though. Mind
creating a Jira issue?
On Sat, Aug 2, 2014 at 8:35 AM, Ramana OpenSource <
ramanaopensou...@gmail.com> wrote:
> Hi All,
>
> I am using Replication backup command to create snapshot of my index.
>
> http://localhost:8983/solr/replication?command=backup&numberToKeep=2
Elasticsearch and Solr are both based on Lucene, so a sizeable fraction of
their performance characteristics will be similar, if not identical.
IOW, they are both using the same "search engine" under the hood.
Sure, the right "tires", "transmission", and "body" can make a big
difference in performance as well, but the engine underneath is the same.
Hi All,
I am using Replication backup command to create snapshot of my index.
http://localhost:8983/solr/replication?command=backup&numberToKeep=2
At any point, if I would like to know how many backups are available, is there
any API that supports this?
The closest one I see is
http://l
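Since there is no dedicated API for this (per the reply above), one workaround is to count the snapshot.* directories that the backup command writes into the core's data directory by default. A minimal sketch in plain Java; the data directory path here is an assumption and will differ per install:

import java.io.File;

public class CountBackups {
    public static void main(String[] args) {
        // Assumed location of the core's data directory; adjust for your install.
        File dataDir = new File("/var/solr/example/solr/collection1/data");

        // Replication backups are written as directories named "snapshot.<timestamp>"
        // (or "snapshot.<name>" if a name parameter was given).
        File[] snapshots = dataDir.listFiles(
                (dir, name) -> name.startsWith("snapshot."));

        int count = (snapshots == null) ? 0 : snapshots.length;
        System.out.println("Backups found: " + count);
    }
}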
I filed https://issues.apache.org/jira/browse/SOLR-6314 to track this issue
going forward.
Any ideas around this problem?
Thanks,
Vamsee
On Tue, Jul 29, 2014 at 4:00 PM, Vamsee Yarlagadda wrote:
> Hi,
>
> I am trying to work with multi-threaded faceting on SolrCloud and in
> the process i
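For reference, multi-threaded faceting is requested with the facet.threads parameter. A rough sketch of such a request in plain Java; the host, collection, and field names below are placeholders, not taken from the thread:

import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class ThreadedFacetQuery {
    public static void main(String[] args) throws Exception {
        // facet.threads asks Solr to compute the listed facet fields in parallel
        // within a single request. Host, collection, and field names are placeholders.
        String url = "http://localhost:8983/solr/collection1/select"
                + "?q=*:*&rows=0&facet=true"
                + "&facet.field=make&facet.field=model"
                + "&facet.threads=4&wt=json";

        try (InputStream in = new URL(url).openStream();
             Scanner scanner = new Scanner(in, "UTF-8")) {
            // Print the raw JSON response containing the facet counts.
            System.out.println(scanner.useDelimiter("\\A").next());
        }
    }
}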
On 8/1/2014 3:17 PM, Ethan wrote:
> Our SolrCloud setup: 3 nodes with ZooKeeper, 2 running SolrCloud.
>
> Current dataset size is 97GB. The JVM heap is 10GB, but only 6GB is used (to
> keep garbage collection times down). RAM is 96GB.
>
> Our soft commit is set to 2 seconds and hard commit is set to 1 hour.
>
> We are suddenly seeing high disk and network IO. During
4.5.0.
We are trying to free memory by deleting data from 2010. But that hasn't
helped so far.
On Fri, Aug 1, 2014 at 3:13 PM, Otis Gospodnetic wrote:
> Which version of Solr?
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
Which version of Solr?
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Aug 1, 2014 at 11:17 PM, Ethan wrote:
> Our SolrCloud setup : 3 Nodes with Zookeeper, 2 running SolrCloud.
>
> Current dataset size is 97GB, JVM
Our SolrCloud setup: 3 nodes with ZooKeeper, 2 running SolrCloud.
Current dataset size is 97GB. The JVM heap is 10GB, but only 6GB is used (to
keep garbage collection times down). RAM is 96GB.
Our soft commit is set to 2 seconds and hard commit is set to 1 hour.
We are suddenly seeing high disk and network IO. During
Query results default to sorting by score, but spelling suggestions sort by edit
distance, with frequency as a secondary sort.
unie => unger = 2 edits
unie => unick = 2 edits
unie => united = 3 edits
unie => unique = 3 edits
... etc ...
James Dyer
Ingram Content Group
(615) 213-4311
Everything that I read says that the default sort order is by score, yet this
appears to me to be sorted by frequency:
10
Ummm, 400k documents is _tiny_ by Solr/Lucene standards. I've seen 150M
docs fit in 16G on Solr. I put 11M docs on my laptop
So I would _strongly_ advise that you don't worry about space at all as a
first approach and freely copy as many fields as you need to support your
use-case. Only after
On 8/1/2014 4:19 AM, anand.mahajan wrote:
> My current deployment :
> i) I'm using Solr 4.8 and have set up a SolrCloud with 6 dedicated machines
> - 24 Core + 96 GB RAM each.
> ii) There are over 190M docs in the SolrCloud at the moment (across all
> replicas it is consuming 2340GB of disk overall), which
On 8/1/2014 3:22 AM, rulinma wrote:
> I use solrconfig.xml as follows:
>
> <updateLog>
>   <str name="dir">${solr.ulog.dir:}</str>
> </updateLog>
>
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <maxDocs>5000</maxDocs>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>360</maxTime>
>   <maxDocs>50</maxDocs>
> </autoSoftCommit>
>
> I commit once every 2000 docs, but when I query I see counts like 2002, 2003,
On 8/1/2014 3:21 AM, Sören Schneider wrote:
> I'm looking for a way to (programmatically) replace a Solr index
> on-the-fly using SolrJ, just as mentioned in Solr CoreAdmin Wiki[1]. I
> already managed to create a dump of the index on-the-fly.
>
> The intention is to use the dump transparently whil
Hi,
We're in the development phase of a new application and the current dev
team mindset leans towards running Solr (4.9) in AWS without ZooKeeper. The
theory is that we can add nodes quickly to our load balancer
programmatically, get a dump of the indexes from another node, and copy
them over t
Thanks for the reply Shalin.
1. I'll try increasing the softCommit interval and the autoSoftCommit too.
One mistake I made, which I realized just now, is that I am using /solr/select
and expecting it to do NRT - for NRT search it's the /get (real-time get)
handler that needs to be used. Please confirm.
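For reference, the real-time get handler is normally exposed at /get. A minimal sketch of calling it from plain Java; the host, core name, and document id are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class RealTimeGetExample {
    public static void main(String[] args) throws Exception {
        // /get returns the latest version of a document, including updates that
        // are only soft-committed or still sitting in the transaction log.
        String id = URLEncoder.encode("doc-123", "UTF-8");
        String url = "http://localhost:8983/solr/collection1/get?id=" + id + "&wt=json";

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line); // {"doc":{...}} or {"doc":null}
            }
        }
    }
}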
Increasing autoCommit doesn't increase RAM consumption. It just means that
more items would be in the transaction log and that node restart/recovery will
be slower.
On Fri, Aug 1, 2014 at 7:10 PM, anand.mahajan wrote:
> Oops - my bad - Its autoSoftCommit that is set after every doc and not an
> aut
Auto soft committing that frequently can also dramatically impact
performance. Perhaps not nearly as much as a hard commit, but I would
still consider increasing the interval.
Also, hard commits every 10 seconds at that volume are quite frequent.
I'd consider doing soft commits every 10 seconds and hard commits much less often.
Oops - my bad - it's autoSoftCommit that is set to trigger after every doc, not
autoCommit.
The following snippet is from the solrconfig -
1
true
1
Shall I increase the autoCommit time as well? But would that mean more RAM
is consumed by all instances running on the box?
Comments inline:
On Fri, Aug 1, 2014 at 3:49 PM, anand.mahajan wrote:
> Hello all,
>
> Struggling to get this going with SolrCloud -
>
> Requirement in brief :
> - Ingest about 4M Used Cars listings a day and track all unique cars for
> changes
> - 4M automated searches a day (during the inge
Hello,
On the new suggester, when the field is multiValued="true", it's not working.
I need to try the patch "LUCENE-3842" to test autocomplete, but I don't know how.
I have Solr 4.7.2, not the source code.
Can someone help?
Best regards,
Anass BENJELLOUN
Hello all,
Struggling to get this going with SolrCloud -
Requirements in brief:
- Ingest about 4M Used Cars listings a day and track all unique cars for
changes
- 4M automated searches a day (during the ingestion phase to check if a doc
exists in the index (based on values of 4-5 key fields) o
I use solrconfig.xml as follows:

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<autoCommit>
  <maxTime>15000</maxTime>
  <maxDocs>5000</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>360</maxTime>
  <maxDocs>50</maxDocs>
</autoSoftCommit>

I commit once every 2000 docs, but when I query I see counts like 2002, 2003, 2004
and so on, not increments of 2000 (2000, 4000, 6000). I don't know why?
Hi,
I'm looking for a way to (programmatically) replace a Solr index
on-the-fly using SolrJ, just as mentioned in Solr CoreAdmin Wiki[1]. I
already managed to create a dump of the index on-the-fly.
The intention is to use the dump transparently while rebuilding the
"original" index to achiev
On 01/08/2014 09:53, Alexandre Rafalovitch wrote:
Thank you Charlie, very informative even if non-scientific.
About the aggregations, are they very different from:
http://heliosearch.org/solr-facet-functions/ (obviously not yet
production ready)?
They're the same sort of thing. The ES signific
Thanks Jack, it works for me.
Regards
Pradip
Thank you Charlie, very informative even if non-scientific.
About the aggregations, are they very different from:
http://heliosearch.org/solr-facet-functions/ (obviously not yet
production ready)?
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newslette
On 01/08/2014 06:43, Alexandre Rafalovitch wrote:
Maybe Charlie Hull can answer that:
https://twitter.com/FlaxSearch/status/494859596117602304 . He seems to
think that - at least in some cases - Solr is faster.
I'll try to expand on the tweet.
Firstly, this is a totally unscientific comparison
Perhaps the actual suggester module is a better fit, then:
http://blog.mikemccandless.com/2012/09/lucenes-new-analyzing-suggester.html
http://romiawasthy.blogspot.fi/2014/06/configure-solr-suggester.html
Also:
http://jayant7k.blogspot.com/2014/03/an-interesting-suggester-in-solr.html
Regards,
Aha. I don't know if Solr Suggester can do that. Let's see what others
say. I know http://www.sematext.com/products/autocomplete/ could do that.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Aug 1, 2014 at 9:26 AM
We are having an issue with the phrase highlighter and the Surround Query Parser,
e.g. *"first thing" w/100 "you must"* brings correct results but also
highlights the individual words of the phrase - "first" and "thing" are highlighted
where they occur separately as well.
Any idea how this can be fixed?
--
Regards,
Hello,
You didn't understand my problem well; I'll give you an example:
The document contains the word "genève".
q="gene": the auto-suggestion gives "geneve"
q="genè": the auto-suggestion gives "genève"
But what I need is for q="gene" to suggest "genève", with the accent, as a
correction of the word.
I tried to add spel
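A common way to get accent-insensitive suggestions is to apply accent folding (for example ASCIIFoldingFilterFactory) on the field the suggester reads from, while keeping the original accented form as the stored/display value. As a rough illustration of just the folding step, here is a plain-Java sketch using the JDK Normalizer; the Solr-side analyzer wiring is not shown and depends on which suggester is configured:

import java.text.Normalizer;

public class AccentFoldDemo {
    // Strips diacritics so that "genève" matches a query typed as "geneve"/"gene".
    static String fold(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", ""); // drop combining marks
    }

    public static void main(String[] args) {
        System.out.println(fold("genève")); // prints "geneve"
        // Index the folded form for matching; keep "genève" as the stored/display value.
    }
}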