Thanks for replying.
I'm not sure if I can do something like this with TrieLongField.
My solr document:
{
"myRange": "[100 TO 200]"
}
And then query it like this:
"myRange":101
It would fail when importing the document - am I missing something?
Michał
2015-09-23 3:25 GMT+
Further to this, please see an example below:
http://solr:port/solr/cloud1_shard1_replica1/select?q=UCID%3Addfdf4&fl=UCID&wt=json&indent=true
{
  "responseHeader":{
    "status":0,
    "QTime":1157,
    "params":{
      "q":"UCID:ddfdf4",
      "indent":"true",
      "fl":"UCID",
      "wt":"json"
I am getting an issue with SolrCloud: the stored field is not reflected in
search results, whereas we are able to get results
It sounds like you want a TrieLongField, to me. Check it out in the field
types here -
https://cwiki.apache.org/confluence/display/solr/Field+Types+Included+with+Solr
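A minimal sketch of what that suggestion usually looks like in practice (the field names are hypothetical, not from this thread). TrieLongField indexes plain points, so the document carries a number and the *query* carries the range:

  In schema.xml:
    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"/>
    <field name="myNumber" type="tlong" indexed="true" stored="true"/>

  Index {"myNumber": 150} and query with myNumber:[100 TO 200].

If instead, as in Michal's follow-up, the document must carry the range and the query a point, a common workaround is two point fields:

    <field name="rangeMin" type="tlong" indexed="true" stored="true"/>
    <field name="rangeMax" type="tlong" indexed="true" stored="true"/>

  queried with rangeMin:[* TO 101] AND rangeMax:[101 TO *].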
On Tue, Sep 22, 2015 at 6:31 PM, Michal Fijolek
wrote:
> Hi,
> I wanted to use something like DateRangeField, but only for numerical
Don’t do anything. Solr will automatically clean up the deleted documents for
you.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 22, 2015, at 6:01 PM, CrazyDiamond wrote:
>
> my index is updating frequently and I need to remove unused documents
Avoid optimize like the plague.
Instead, focus on tuning the segment merging process. As you commit, index
segments are created, and they're periodically merged. Merging removes the
remnants of the tombstoned (deleted) docs. You can tune this process,
adjust it, etc. If you're dealing with a lot of updates, thi
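A minimal sketch of that tuning in solrconfig.xml, assuming the default TieredMergePolicy (the values are illustrative placeholders, not recommendations from this thread):

  <indexConfig>
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <!-- how many segments are merged at once / allowed per tier -->
      <int name="maxMergeAtOnce">10</int>
      <int name="segmentsPerTier">10</int>
      <!-- bias merge selection toward segments with many deleted docs -->
      <double name="reclaimDeletesWeight">3.0</double>
    </mergePolicy>
  </indexConfig>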
my index is updating frequently and I need to remove unused documents from
the index after update/reindex.
Optimization is very expensive, so what should I do?
On 9/22/2015 11:54 AM, vsilgalis wrote:
> I've actually read that article a few times.
>
> Yeah I know we aren't perfect in opening searchers. Yes we are committing
> from the client, this is something that is changing in our next code
> release, AND we are auto soft committing every second.
>
>
Hi,
I wanted to use something like DateRangeField, but only for numerical
ranges, not dates, so I'm looking for something like a NumericalRangeField.
I see that DateRangeField works with some numbers up to Integer.MAX_VALUE.
It's kind of a hack, because parsing a year in the method
DateRangePrefixTree.parseC
Hi Mark,
let's summarise a little bit:
First of all, you are using the IndexBasedSpellChecker, which is what is
usually called "based on the sidecar index".
Basically you are building a mini Lucene index to be used with the
spellcheck component.
It behaves as a classic Lucene index, so it needs com
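A minimal sketch of such a sidecar setup in solrconfig.xml, assuming a hypothetical source field named "spell" (this is not Mark's actual config):

  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="classname">solr.IndexBasedSpellChecker</str>
      <!-- field of the main index the sidecar index is built from -->
      <str name="field">spell</str>
      <!-- the sidecar index needs its own directory, separate from the main index -->
      <str name="spellcheckIndexDir">./spellchecker</str>
      <str name="buildOnCommit">true</str>
    </lst>
  </searchComponent>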
Let's back up quite a ways here. Where did the 20G file come from?
Indexing files in JSON requires that they follow a very specific format;
Solr doesn't index arbitrary JSON files.
With that out of the way, yes, 20G is unlikely to work without tweaking
some parameters in both solrconfig.xml (there
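For reference, a minimal sketch of the JSON update format Solr does accept - an array of flat documents posted to the update handler (the core name and fields here are hypothetical):

  [
    {"id": "1", "title": "first document"},
    {"id": "2", "title": "second document"}
  ]

  curl 'http://localhost:8983/solr/mycore/update?commit=true' \
       -H 'Content-Type: application/json' --data-binary @chunk1.json

Splitting the 20G file into chunks of this shape and posting them one at a time sidesteps the single giant request.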
Yep. Sounds bad.
First of all, your filterCache will potentially occupy
(maxDoc / 8) * 32,768 bytes, plus some slop.
Additionally, you're replaying the last 256 filter queries every time
you open a new searcher (i.e. do a soft commit, or a hard commit with
openSearcher=true). Actually, probably whenev
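To put a number on the first point: assuming a hypothetical maxDoc of 100,000,000 and a filterCache sized at 32,768 entries, each cached bitset costs 100,000,000 / 8 = 12.5 MB, so a full cache could occupy about 32,768 * 12.5 MB, i.e. roughly 400 GB in the worst case. The figures are illustrative, but they show why that cache size is dangerous.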
Hi,
I am relatively new to Solr and have a usage query. I have a 20 GB JSON file
which I want to upload into Solr. Do I have to form smaller chunks, or is
there a way to upload the whole thing in one go?
I am getting the following error with bin/post:
Entering auto mode. File endings considered
FWIW, there is work being done for "high cardinality faceting" with
some of the recent Streaming Aggregation code.
So it's at least on the way if not already there.
Erick
On Tue, Sep 22, 2015 at 11:44 AM, Toke Eskildsen
wrote:
> adfel70 wrote:
>> Hi Toke, Thank you for the detailed explanatio
adfel70 wrote:
> Hi Toke, Thank you for the detailed explanation, that's exactly what I was
> looking for, except this algorithm fits a single index only. Could you please
> elaborate what adjustments are needed for a distributed index?
Vanilla Solr requests top-X terms from each shard, with over-provi
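As a concrete illustration of that over-provisioning (my numbers, based on the documented defaults facet.overrequest.ratio=1.5 and facet.overrequest.count=10, not on Toke's mail): with facet.limit=30, each shard is asked for 30 * 1.5 + 10 = 55 terms before the per-shard counts are merged.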
Our auto setup sequence is:
1. Deploy 3 ZooKeeper nodes.
2. Deploy Solr nodes and start them connecting to ZooKeeper.
3. Upload the collection config to ZooKeeper.
4. Call the create-collection REST API.
5. Done. SolrCloud ready to work.
Don't yet have automation for replacing or adding a node.
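A minimal sketch of steps 3 and 4 with the stock 5.x tooling (hostnames, config name, and sizing are placeholders):

  server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 \
      -cmd upconfig -confname myconf -confdir /path/to/conf

  curl 'http://solr1:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=3&collection.configName=myconf'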
On Sep 22, 2015 18:27, "Steve Dav
Thanks Anshum
On Mon, Sep 21, 2015 at 6:23 PM, Anshum Gupta
wrote:
> CloudSolrClient is thread safe and it is highly recommended you reuse the
> client.
>
> If you are providing an HttpClient instance while constructing, make sure
> that the HttpClient uses a multi-threaded connection manager.
>
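A minimal SolrJ sketch of that advice (the ZooKeeper address, pool sizes, and collection name are placeholders, and constructor details vary a bit across releases):

  import org.apache.http.impl.client.CloseableHttpClient;
  import org.apache.http.impl.client.HttpClients;
  import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
  import org.apache.solr.client.solrj.impl.CloudSolrClient;

  public class SolrClientFactory {
      // Build one shared, thread-safe client for the whole application.
      public static CloudSolrClient create() {
          // Pooled (multi-threaded) connection manager, as Anshum recommends.
          PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
          cm.setMaxTotal(200);
          cm.setDefaultMaxPerRoute(32);
          CloseableHttpClient httpClient =
              HttpClients.custom().setConnectionManager(cm).build();
          CloudSolrClient client =
              new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181", httpClient);
          client.setDefaultCollection("mycollection");
          return client;
      }
  }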
Erick Erickson wrote
> Things shouldn't be going into recovery that often.
>
> Exceeding maxWarmingSearchers indicates that you're committing
> very often, and that your autowarming interval exceeds the interval
> between
> commits (either hard commit with openSearcher set to true or soft
> c
Things shouldn't be going into recovery that often.
Exceeding maxWarmingSearchers indicates that you're committing
very often, and that your autowarming interval exceeds the interval between
commits (either hard commits with openSearcher set to true, or soft commits).
I'd focus on that bit fir
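A minimal sketch of the solrconfig.xml settings involved (the intervals are placeholders; the point is only that autowarming must finish well before the next searcher opens):

  <autoCommit>
    <!-- hard commit for durability; does not open a new searcher -->
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <!-- new searcher at most every 10s, so autowarming has room to finish -->
    <maxTime>10000</maxTime>
  </autoSoftCommit>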
Hi Alessandro,
I think you are facing this issue:
https://issues.apache.org/jira/browse/SOLR-6246
Ludovic.
-
Jouve
France.
Thanks Lorenzo
Don't know why, but I didn't get it when I first read your mail. Pretty sure
that worked earlier; the fix itself should be rather easy - I'll attach a
patch to Shawn's ticket.
What's supposed to happen (in the current UI): reload the current address,
which is the same as y
We have a collection with 2 shards, 3 nodes per shard, running Solr 4.10.2.
Our issue is that cores that get in recovery never recover, they are in a
constant state of recovery unless we restart the node and then reload the
core on the leader. Updates seem to get to the server fine as the
transacti
Faceting on an author field is almost always a bad idea. Or at least a slow,
expensive idea.
Faceting makes big in-memory lists. More values, bigger lists. An author field
usually has many, many values, so you will need a lot of memory.
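A common mitigation (mine, not from Walter's mail) is to give such a field docValues, which keeps the per-value structures in memory-mapped files rather than on the Java heap - a minimal schema.xml sketch with a hypothetical field name:

  <field name="author" type="string" indexed="true" stored="true"
         multiValued="true" docValues="true"/>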
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Can you also try testing with one facet at a time and see if we hit a
particular facet that is slow?
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Sep 22, 2015 at 9:36 AM, Uwe Reh wrote:
> The exact version as shown by the UI is:
> - solr-impl 5.3.0 1696229 - noble - 2015-08-17 17:10:4
Hi,
I am trying to come up with a repeatable process for deploying a Solr Cloud
cluster from scratch along with the appropriate security groups, auto
scaling groups, and custom Solr plugin code. I saw that LucidWorks created
a Solr Scale Toolkit but that seems to be more of a one-shot deal than
re
OK, I gave each of these spellcheckIndexDir tokens a distinct location --
distinct from each other and from the main index. This has resolved the
write.lock problem when I attempt a spellcheck.build! Thanks for the help!
I looked in the new spellcheckIndexDir location and the directory is
populated wit
@Stefan
I checked the browser console: no errors, and a bunch of GET requests from
scripts and XHR, some of them returning 200 and others 304.
I am able to see the overlay, but this is when I click on "*watch changes*";
this feature actually works as expected.
It is the "Refresh
[Attachment: virtualvm_snapshot_solr5.3_facetting.csv (spreadsheet)]
@Shawn, here's my browser config:
Google Chrome: 45.0.2454.93 (Official Build) (64-bit)
Revision: ba1cb72081c2c07e4b689082852b1463fbca95f5-refs/branch-heads/2454@{#466}
OS: Mac OS X
Blink: 537.36 (@202161)
JavaScript: V8 4.5.103.31
Flash: 18.0.0.232
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleW
dang, sounds like Shawn is right on .. Lorenzo, can you tell us more about
the system you're using including browser specifics?
-Stefan
On Tuesday, September 22, 2015 at 4:17 PM, Shawn Heisey wrote:
> On 9/22/2015 7:42 AM, Lorenzo Fundaró wrote:
> > when selecting a core, under the tab Plugi
On 9/22/2015 8:17 AM, Shawn Heisey wrote:
> On 9/22/2015 7:42 AM, Lorenzo Fundaró wrote:
>> when selecting a core, under the tab Plugin/Stats, if I click on "Refresh
>> Values" I get redirected to the dashboard, is this the right behaviour ? or
>> is it a bug ? I think it should stay on the page an
Hi Lorenzo
That is not supposed to happen on either 5.0.0 or 5.3.0 - no matter whether
you're using the current or the new admin UI (talking about the angular.js app).
Are you able to have a look at your browser console while that happens?
Do you get error messages? any other output?
What should happe
On 9/22/2015 7:42 AM, Lorenzo Fundaró wrote:
> when selecting a core, under the tab Plugin/Stats, if I click on "Refresh
> Values" I get redirected to the dashboard, is this the right behaviour ? or
> is it a bug ? I think it should stay on the page and refresh the stats. I
> am using solr 5.0.0, b
Hello folks,
when selecting a core, under the tab Plugin/Stats, if I click on "Refresh
Values" I get redirected to the dashboard. Is this the right behaviour, or
is it a bug? I think it should stay on the page and refresh the stats. I
am using Solr 5.0.0, but I found the same behaviour on 5.3.0.
The exact version as shown by the UI is:
- solr-impl 5.3.0 1696229 - noble - 2015-08-17 17:10:43
- lucene-impl 5.3.0 1696229 - noble - 2015-08-17 16:59:03
Unfortunately my skills in debugging are limited. So I'm not sure about
a 'deeper caller stack'.
Did you mean the attached snapshot from Vi
Mikhail,
Yes, both the Index-based and File-based spell checkers reference the
same index location. My understanding is they were supposed to. I
didn't realize this was for writing indexes. Rather, I thought this was
for reading the main index. So, I need to make 3 separate locations for
Hi guys,
it's the second time this morning that I've faced this problem:
1) I create my Solr Cloud cluster with external Zk ensemble
2) I create my collection from hosted config in Zk
3) I index some docs ( Index is build, InfixSuggester index is built for my
3 suggesting dictionaries)
4) I change my conf
It's quite strange:
https://issues.apache.org/jira/browse/SOLR-7730 significantly optimized DV
facets in 5.3.0, exactly by avoiding the FieldInfos merge.
Would you mind providing a deeper caller stack for
org.apache.lucene.index.MultiFields.getMergedFieldInfos()?
Or the time spent in SlowComposit
here is my try to detect some hot spots with VisualVM.
Environment:
A newly started node with ~15 times the query:
http://yxz/solr/hebis/select/?q=darwin&facet=true&facet.mincount=1&facet.limit=30&facet.field=material_access&facet.field=department_3&facet.field=rvk_facet&facet.fie
Russ,
Do you mean you accelerated only the side case with small cardinality, or is
your problem resolved in general and 2.3 sec is fine for you?
Regarding longValues: is it multivalued? It might work if the {!join} parser
passes multipleValuesPerDocument=false into
http://lucene.apache.org/core/5_2_1/join/o
Hi,
I've just set up Solr 5.3 and moved the two indexes into it.
I've tried the join below
{!join from=stringValue to=stringValue fromIndex=indexB score=none}universe:uniA
and the QTime was 2.3 seconds:
"QTime": 2326
which is much, much better than before. I also tried the longValues
{!join from
On 22.09.2015 at 02:12, Joel Bernstein wrote:
Have you looked at your Solr instance with a cpu profiler like YourKit? It
would be useful to see the hotspots which should be really obvious with 20
second response times.
No, until now I have done no profiling. I thought the unused
fieldValueCac