Do you have to group, or can you collapse instead?
https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
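A minimal sketch of a collapse query, assuming a hypothetical group_field to collapse on (see the linked page for the real parameters):

```
# keep one document per group_field value; expand=true returns
# the collapsed members of each group in a separate section
q=*:*&fq={!collapse field=group_field}&expand=true
```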
Cheers
Tom
On Tue, Jun 14, 2016 at 4:57 PM, Jay Potharaju wrote:
> Any suggestions on how to handle result grouping in a sharded index?
>
>
> On Mon, Jun 13, 2016 at 1:
Hello,
I have the below custom field type defined for Solr 6.0.0.
I am using this field to ensure that the entire string is considered as a single
token and that search is case-insensitive.
It works for most scenarios with wildcard search.
Hi Roshan,
I think there are two options:
1) escape the space q=abc\ p*
2) use prefix query parser q={!prefix f=my_string}abc p
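Sketched as raw query strings (my_string is the field name from the original message; remember to URL-encode when sending over HTTP):

```
# option 1: escape the space so "abc p*" stays a single wildcard term
q=abc\ p*
# option 2: prefix query parser against the my_string field
q={!prefix f=my_string}abc p
```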
Ahmet
On Wednesday, June 15, 2016 3:48 PM, Roshan Kamble
wrote:
Hello,
I have below custom field type defined for solr 6.0.0
Great.
First option worked for me. I was trying q=abc\sp*... it should be q=abc\ p*
Thanks
-----Original Message-----
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Wednesday, June 15, 2016 6:25 PM
To: solr-user@lucene.apache.org; Roshan Kamble
Subject: Re: wildcard search for string h
I need help in understanding a query in SolrCloud.
When a user issues a query, there are two phases of the query - one with the
purpose (from debug info) of GET_TOP_FIELDS and another with GET_FIELDS.
This is having an effect on the end-to-end performance of the application.
- what triggers (any compo
In addition to what Erick correctly proposed,
are you storing norms for your field of interest ( to boost documents with
shorter field values )?
If you are, I find it suspicious that "Sony Ear Phones" wins over "Ear Phones"
for your "Ear Phones" query.
What are the other factors currently involved in you
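For reference, norms are what feed length normalization into scoring; a hypothetical schema field that keeps them enabled (omitNorms="false") is sketched below - with norms on, a shorter value like "Ear Phones" would normally outscore "Sony Ear Phones":

```
<field name="product_name" type="text_general"
       indexed="true" stored="true" omitNorms="false"/>
```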
It sounds like you were somehow getting the old jar file
rather than the new one. It _should_ have worked to just
drop the new jar file in the directory, assuming you'd
removed all traces of the old jar file...
But if you have it working now, that's what counts.
FWIW,
Erick
On Tue, Jun 14, 2016
Hi all,
My team at work maintains a SolrCloud 5.3.2 cluster with multiple
collections configured with sharding and replication.
We recently backed up our Solr indexes using the built-in backup
functionality. After the cluster was restored from the backup, we
noticed that atomic updates of documen
Collapse would also not work since it requires all the data to be on the
same shard.
"In order to use these features with SolrCloud, the documents must be
located on the same shard. To ensure document co-location, you can define
the router.name parameter as compositeId when creating the collection."
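As a sketch, with the compositeId router co-location comes from a shared prefix in the document id (the "groupA" prefix and group_key field here are hypothetical):

```
# ids sharing the "groupA!" prefix hash to the same shard,
# so grouping/collapsing on that key stays shard-local
{"id": "groupA!doc1", "group_key": "groupA"}
{"id": "groupA!doc2", "group_key": "groupA"}
```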
Simplest, though a bit risky, is to manually edit the znode and
correct the znode entry. There are various tools out there, including
one that ships with Zookeeper (see the ZK documentation).
Or you can use the zkcli scripts (the Zookeeper ones) to get the znode
down to your local machine, edit it
I'll be happy to commit it if someone fixes the current problems with it.
Best,
Erick
On Sun, Mar 6, 2016 at 6:44 PM, Zheng Lin Edwin Yeo
wrote:
> Thank you.
>
> Looking forward for this to be solved.
>
> Regards,
> Edwin
>
>
> On 7 March 2016 at 07:41, William Bell wrote:
>
>> Can we get this
Is it possible to give an example? I want doc1 to be explicitly routed to
"shard1" of my "implicit" collection and doc2 to "shard4". How can I do
that?
Creating an implicit collection with one of the example configurations of
the solr package, defining the "id" field as the router.field (not sure
Any distributed query falls into the two-phase process. Actually, I think some
components may require a third phase. (faceting?)
However, there are also cases where only a single pass is required. An
fl=id,score request will only be a single pass, for example, since it doesn’t need to
get the field valu
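As a sketch of the request parameters involved (distrib.singlePass has existed since Solr 5; whether it helps depends on how large your stored fields are):

```
# default two-phase: first pass fetches ids + sort values,
# second pass fetches the requested fields from each shard
q=foo&fl=id,name
# force single-pass: shards return stored fields in the first phase
q=foo&fl=id,name&distrib.singlePass=true
# fl=id,score needs no second pass at all
q=foo&fl=id,score
```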
Did you try setting the "magic" field _route_ in your docs to the
shard? Something like
doc.addField("_route_", "shard1")?
Best,
Erick
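The same idea as a JSON update request, assuming an implicit-router collection named "mycollection" with shards named shard1..shardN (names hypothetical):

```
curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id": "doc1", "_route_": "shard1"},
       {"id": "doc2", "_route_": "shard4"}]'
```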
On Wed, Jun 15, 2016 at 10:31 AM, nikosmarinos wrote:
> Is it possible to give an example? I want doc1 to be explicitly routed to
> "shard1" of my "implicit" coll
Thank you Jeff. Let me try out how much improvement I get from the single-pass
param.
Sent from my iPhone
> On Jun 15, 2016, at 1:59 PM, Jeff Wartes wrote:
>
> Any distributed query falls into the two-phase process. Actually, I think
> some components may require a third phase. (faceting?)
>
I know this has been discussed in the past (although not too recently), but
the advice in those threads has failed us, so here we are.
Some basics:
We're running Solr 6 (6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
nknize - 2016-04-01 14:41:49), on Java 8 (OpenJDK Runtime Environment
(build 1.8.0_7
Update on this:
I feel I have a good grasp of synonyms,
in that I am applying them only at query time and not at indexing time.
It looks like this in Synonyms.txt:
sarfaraz jamal,sasjamal, sas,sarfaraz,wiggidy
Each one of those brings back the exact same records.
However, it only highlights Jamal (with
On 6/15/2016 3:05 PM, Cas Rusnov wrote:
> After trying many of the off the shelf configurations (including CMS
> configurations but excluding G1GC, which we're still taking the
> warnings about seriously), numerous tweaks, rumors, various instance
> sizes, and all the rest, most of which regardless
Hey Shawn! Thanks for replying.
Yes, I meant HugePages, not HugeTable; brain fart. I will give the
transparent hugepages off option a go.
I have attempted to use your CMS configs as is and also the default
settings and the cluster dies under our load (basically a node will get a
35-60s GC STW and then the ot
Hi,
When I try to search in Solr based on given search criteria, it searches
all the records, which takes a massive amount of time to complete the query.
I want a solution where I can restrict the search to find only the first 500
records, and then Solr should stop querying.
Since there
On the IRC channel, I ran into somebody who was having problems with
optimizes on their Solr indexes taking a really long time. When
investigating, they found that during the optimize, *reads* were
happening on their SSD disk at over 800MB/s, but *writes* were
proceeding at only 20 MB/s.
Looking
This is pretty much logically impossible. I'd also suggest that your
response times
are tunable, even a very common word such as "AND" shouldn't be taking 18
seconds for 10M docs.
Say you're returning the top 100 docs. You can't know whether the last document
scored should be in the top 100 until y
Hi,
I was trying to find what would be the best number of thread pool size that
needs to be configured on the source site in solrconfig.xml for cross
datacenter replication. We have one target replica and one shard; is it
recommended to have more than one thread?
If we have more than 1 thread, wi
The lsof command on our CentOS 5.4 64-bit server is v4.78. It doesn't support
the `lsof -PniTCP:$SOLR_PORT -sTCP:LISTEN' format in the solr script, so when
running 'solr start' it shows a lot of messages, mainly 'unsupported TCP/TPI
info selection'. I googled and found this link:
[SOLR-7998] Solr star
I installed Solr 6.0.1 and created a core with my own config (copied and
modified from those in basic_config). Then I enabled the classic index schema.
All the dynamicField tags are commented out in my schema.xml. However, when I
click the 'schema' menu item of my core in the admin UI, I see it lists these
two ghost dyna