Hi - Sorry, it was very late at night for me and I think I didn't pick my
wording right.
bq: it is indeed returning documents with only either one of the two query
terms
What I meant was: Initially, I thought it was only returning documents
which contained both 'tv' and 'promotion'. Then I realiz
Hi Guys,
I have set up SolrCloud for production and it is ready to use; currently Solr
is running with two cores in production. The SolrCloud machines are separate
from the standalone Solr, and SolrCloud has two collections similar to Solr.
Is it possible, and would it be useful, if I could replicate data fr
You say you have two cores. Are they the same collection? That is, are you doing
distributed search? If not, you can use the replication API's fetchindex
command to manually move them.
For that matter, you can just scp the indexes over too, they're just files.
If you're doing distributed search on your
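The fetchindex command is just an HTTP call to the destination core's
replication handler, something like this (host and core names here are
placeholders):

```
http://destination-host:8983/solr/core1/replication?command=fetchindex&masterUrl=http://source-host:8983/solr/core1/replication
```

The destination core pulls the index from whatever masterUrl you give it.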
bq: I hope that clears the confusion.
Nope, doesn't clear it up at all. It's not clear which query you're
talking about, at least to me.
If you're searching for
name:tv AND name:promotion
and getting back a document that has only "tv" in the name field
that's simply wrong and you need to find
Agreed, you need to show the debug query info from your original query:
My syntax is something like this:
>> >>> >> http://localhost:8983/solr/sales/select?indent=on&wt=json&
>> >>> >> fl=*,score&q=name:tv promotion
and could probably help you get the results you want
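Note that in q=name:tv promotion only "tv" is qualified by name:; "promotion"
is searched against the default field, which would explain matches on either
term alone. A sketch of a query requiring both terms in the name field (same
core assumed), with debugQuery=on so you can see how it parses:

```
http://localhost:8983/solr/sales/select?indent=on&wt=json&fl=*,score&q=name:tv AND name:promotion&debugQuery=on
```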
On Thu, Jun 8,
Sorry, I did not give enough information.
"Doesn't work" means that the documents are not getting indexed. I am
using a full import. I did discover that if I used the Linux touch command
the document would re-index. I don't have any of the logs as I have been
able to get the document
Hi Team,
Is there any way we can bring down ZK without impacting Solr?
I know it might be a silly question, as Solr totally depends on ZK for all I/O
operations and configuration changes.
Thanks,
Venkat.
Thanks Erick
No, I'm not doing distributed search. These two cores hold different types of
information.
If I understand you correctly, I can just use scp to copy index files from
Solr to any shard of SolrCloud, and then SolrCloud would balance the data
itself.
Cheers
On Thu, 8 Jun 2017 at 15:4
I figured out why it was not re-indexing without changing the timestamp even on
the full import. In my DIH I had a parameter in my top level entity that was
checking for the last indexed time.
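For anyone hitting the same thing: in my case it was the kind of
last-indexed-time filter that FileListEntityProcessor supports, roughly like
this (the directory, file pattern, and entity name below are placeholders):

```xml
<entity name="files" processor="FileListEntityProcessor"
        baseDir="/data/docs" fileName=".*\.xml"
        newerThan="'${dataimporter.last_index_time}'">
</entity>
```

With that filter in place, even a full-import skips files whose timestamp
predates the last run, which is why touch-ing the file made it re-index.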
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
bq: would balance the data itself.
not if you mean split it up amongst shards. The entire index would be
on a _single_ shard. If you then do ADDREPLICA on that shard it'll
replicate the entire index to each replica
Also note that when you scp stuff around I'd recommend the destination
Solr node b
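The ADDREPLICA step is a Collections API call, something like this
(collection, shard, and node names are placeholders):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=host2:8983_solr
```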
Thanks for bringing closure to that.
Erick
On Thu, Jun 8, 2017 at 9:12 AM, Miller, William K - Norman, OK -
Contractor wrote:
> I figured out why it was not re-indexing without changing the timestamp even
> on the full import. In my DIH I had a parameter in my top level entity that
> was checkin
Well, it depends on what you mean by "impacting".
When ZK drops below quorum you will no longer be able to send indexing
requests to Solr; they'll all fail. At least they'd better ;).
_Queries_ should continue to work, but you're in somewhat uncharted
territory; nobody I know runs that way very lon
Thanks Erick.
On Thu, 8 Jun 2017 at 17:28 Erick Erickson wrote:
> bq: would balance the data itself.
>
> not if you mean split it up amongst shards. The entire index would be
> on a _single_ shard. If you then do ADDREPLICA on that shard it'll
> replicate the entire index to each replica
>
> Als
I wanted to ask the proper way to query or get the length of a field in
Solr.
I'm trying to fetch the fieldNorm as a result field by querying
localhost:8983/solr/uda/tvrh?q=usage:stuff&fl={!func}norm(usage)&debugQuery=on
Nevertheless, the response to this query is:
I tried to reproduce it on the recent release. Here is what I've got after
adding distrib=false:
java.lang.UnsupportedOperationException: requires a TFIDFSimilarity (such as ClassicSimilarity)
	at org.apache.lucene.queries.function.valuesource.N
Hi, thanks for the reply.
After adding distrib=true, with the query
localhost:8983/solr/uda/tvrh?q=usage:stuff&fl={!func}norm(usage)&debugQuery=on&distrib=true
I've got something similar; I attach the complete Solr log.
2017-06-08 20:22:02.065 INFO (qtp1205044462-18) [c:uda s:shard2
r:core_node2 x:
You probably need to configure a TFIDFSimilarity such as ClassicSimilarity in
the schema and rebuild your index. Otherwise, norm() seems useless to me.
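A sketch of what that schema change could look like, declaring
ClassicSimilarity globally (exact syntax may vary by Solr version):

```xml
<similarity class="solr.ClassicSimilarityFactory"/>
```

After adding it you'd have to reindex, since norms are encoded at index time.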
On Thu, Jun 8, 2017 at 11:24 PM, tstusr wrote:
> Hi, thanks for reply.
>
> After adding true on distrib, with query
>
> localhost:8983/solr/uda/tvrh?q=usa
Hi,
I have a SolrCloud setup with document routing (implicit routing with a router
field). As the index is about documents with a publication date, I routed
according to the publication year since, in my case, most of the search queries
will have a year specified.
Now, what would be the best strateg
Hi,
I am trying to understand what the possible root causes for the
following exception could be.
java.io.FileNotFoundException: File does not exist:
hdfs://*/*/*/*/data/index/_2h.si
I had some long GC pauses while executing some queries which took some
of the replicas down. But how can that a
You mentioned most of the searches will use document routing based on year
as the route key, correct? And then you mention a huge amount of searches
without routing. Can you give some numbers on how many will use routing
vs. not routing?
In general, we should try to serve all the queries with one
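For the queries that do know the year, with the implicit router you can pin
the request to the right shard with the _route_ parameter, e.g. (assuming
shards are named by year; the collection name and query are placeholders):

```
http://localhost:8983/solr/pubs/select?q=text:something&_route_=2016
```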
correction: shared => sharded
On Thu, Jun 8, 2017 at 10:10 PM, Susheel Kumar
wrote:
> You mentioned most of the searches will use document routing based on year
> as route key, correct? and then you mentioning huge amount of searches
> again without routing. Can you give some no# how many will
We have important entities referenced in indexed documents which follow a
naming convention of geographicname-number, e.g. Wainui-8.
I want the tokenizer to treat it as Wainui-8 when indexing, and when I search I
want a q of Wainui-8 (must it be specified as Wainui\-8?) to return docs
with Wainui-
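One way to keep Wainui-8 as a single token is a plain whitespace analyzer
with no word-delimiter filter; a sketch (the field type name is made up):

```xml
<fieldType name="text_keepdash" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With the standard query parser, a hyphen inside a term like Wainui-8 should
not need escaping; the backslash is only needed where the hyphen would
otherwise be read as the NOT operator at the start of a clause.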
I do a search with:
fl=id,title,datasource&hl=true&hl.method=unified&limit=50&page=1&q=pressure+AND+testing&rows=50&start=0&wt=json
and I get back a good list of documents. However, some documents are returning
empty fields in the highlighter. E.g., in the highlight array I have:
"W:\\Reports\\OCR\\427
I want to monitor my Solr instances using JMX and graph performance. Using
Zabbix notation, I end up with a key that looks like this:
jmx["solr/suburbs-1547_shard1_replica1:type=standard,id=org.apache.solr.handler.component.SearchHandler","5minRateReqsPerSecond"]
My problem here is that the key