Hi Chris,
I have raised https://issues.apache.org/jira/browse/SOLR-7954 for the issue.
- What was the datatype of the field(s)?
The data type of the fields that pass is string, with the following
attributes.
The data type of the fields that fail is string, with docValues
enabled and
Hi,
I would like to check: is there any way to remove duplicate suggestions in
Solr?
I have several documents that look very similar, and when I do a
suggestion query, it comes back with all the same results. I'm using Solr 5.2.1.
This is my suggestion pipeline (only the parameter values came through; the
surrounding config XML was stripped): all, json, true, edismax, 10, id, s
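One sketch that may help (untested here; it assumes the duplicated text lives in a single-valued field, called `s` below, which is not taken from your config): collapse the near-duplicate documents with the CollapsingQParserPlugin so only one representative per group is returned:

```text
q=your query&fq={!collapse field=s}&fl=id,s
```

Alternatively, result grouping (group=true&group.field=s&group.limit=1) returns one document per group, but collapsing is usually cheaper.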
My scenario is something like this:
I have a students database. I want to query all the students who were either
`absent` or `present` during a particular `date-range`.
For example:
Student "X" was `absent` between dates:
Jan 1, 2015 and Jan 15, 2015
Feb 13, 2015 and Feb 16, 2015
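One way to model this in Solr 5.x (a sketch; field names are hypothetical) is to index each absence interval as one value of a multiValued DateRangeField and intersect it with the query range:

```xml
<!-- schema.xml sketch: one range value per absence interval -->
<fieldType name="dateRange" class="solr.DateRangeField"/>
<field name="absent_dates" type="dateRange" multiValued="true"
       indexed="true" stored="true"/>
```

Index student "X" with absent_dates values like "[2015-01-01 TO 2015-01-15]" and "[2015-02-13 TO 2015-02-16]"; then a filter such as fq=absent_dates:[2015-01-10 TO 2015-01-20] matches every student absent at any point in that window.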
On 7/8/2015 6:13 PM, Yonik Seeley wrote:
> On Wed, Jul 8, 2015 at 6:50 PM, Shawn Heisey wrote:
>> After the fix (with luceneMatchVersion at 4.9), both "aaa" and "bbb" end
>> up at position 2.
> Yikes, that's definitely wrong.
I have filed LUCENE-6889 for this problem. I'd like to write a unit
test
On 8/20/2015 4:27 PM, CrazyDiamond wrote:
> I have a DIH delta-import query based on last_index_time. It works perfectly,
> but sometimes I add documents to Solr manually and I want DIH not to add
> them again. I have a UUID unique field and also an "id" from the database which
> is marked as pk in DIH
Hey guys, I just logged this bug and I wanted to raise awareness. If you
use the QueryElevationComponent and ask for fl=[elevated], you'll get only
false if Solr is using LazyDocuments. This looks even stranger when you
request exclusive=true and only get back elevated documents, and they
al
I have a DIH delta-import query based on last_index_time. It works perfectly,
but sometimes I add documents to Solr manually and I want DIH not to add
them again. I have a UUID unique field and also an "id" from the database which
is marked as pk in the DIH schema. My question is: will DIH update existing
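One detail that decides this: deduplication in Solr happens on the &lt;uniqueKey&gt;. If the database pk (rather than the generated UUID) is the uniqueKey, a later DIH import of the same row overwrites the manually added document instead of duplicating it. A schema sketch (assuming the pk field is literally named id):

```xml
<!-- schema.xml sketch: point the uniqueKey at the database pk, not the UUID -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```

If the UUID field stays the uniqueKey, every DIH add gets a brand-new key and duplicates are expected.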
Yes. Maybe. It Depends (tm).
Details matter (tm).
If you're firing just a few QPS at the system, then adding replicas is
unlikely to improve throughput. OTOH, if you're firing lots
of simultaneous queries at Solr and are pegging the processors, then
adding replicas will increase aggregate QPS
On Thu, Aug 20, 2015, at 04:34 PM, Jean-Pierre Lauris wrote:
> Hi,
> I'm trying to obtain indexed tokens from a document id, in order to see
> what has been indexed exactly.
> It seems that DocumentAnalysisRequestHandler does that, but I couldn't
> figure out how to use it in java.
>
> The doc s
I have used the JSON Facet API and noticed that it relies heavily on the filter
cache.
The index is optimized, all my fields have docValues="true", the number of
documents is 2.6 million, and I am always faceting on almost all the
documents with an 'fq'.
The sizes of the documentCache and queryResultCache are
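For reference, a request of the shape described above might look like this (the field names and the filter are made up for illustration):

```json
{
  "query": "*:*",
  "filter": ["type:record"],
  "facet": {
    "by_field": { "type": "terms", "field": "category", "limit": 10 }
  }
}
```

POSTed to /select as the json body; each filter entry is exactly what populates the filterCache, which is why faceting over almost all documents with an fq leans on it so heavily.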
If this is for a quick test, have you tried just faceting on that
field, with the document ID set through the query? Faceting returns the
indexed/tokenized items.
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 20 August 2015 at 11:34,
I want to understand why the number of requests in SolrCloud is different with
and without the grouping feature.
1. Suppose we have several shards in SolrCloud (let's say 3 shards).
2. One of them gets a query with rows = n.
3. This shard distributes the request among the others, and suppose that
I took a quick look at FileListEntityProcessor#init, and it looks like it
applies the "excludes" regex to the filename element of the path only, and not
to the directories.
If your filenames do not have a naming convention that would let you use it
this way, you might be able to write a transformer
I am importing files from my file system and want to exclude importing files
from a folder called templatedata. How do I configure that in the entity?
excludes="templatedata" doesn't seem to work.
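As the reply above notes, excludes is matched against the file name, not the directory, so a pattern naming the folder never matches. A sketch of the usual workarounds (paths and names here are examples, not from your config):

```xml
<!-- DIH sketch: excludes applies to file names only -->
<entity name="files" processor="FileListEntityProcessor"
        baseDir="/data/import" fileName=".*\.xml"
        recursive="true" excludes=".*template.*"
        rootEntity="false"/>
```

To skip the templatedata directory entirely you may need a narrower baseDir (or one entity per wanted folder), or a transformer that drops rows whose file path contains that directory.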
--
View this message in context:
http://lucene.472066.n3.nabble.com/exclude-folder-in-dataimport-hand
I see. The UninvertingReader even throws an IllegalStateException if you try
to read a numeric field as sorted doc values. I may have to index extra
fields to support my document collapsing scheme. Thanks for responding.
Hi - we currently have a multi-shard setup running solr cloud without
replication running on top of HDFS. Does it make sense to use
replication when using HDFS? Will we expect to see a performance
increase in searches?
Thank you!
-Joe
Hi,
I'm trying to obtain indexed tokens from a document id, in order to see
what has been indexed exactly.
It seems that DocumentAnalysisRequestHandler does that, but I couldn't
figure out how to use it in java.
The doc says I must provide a contentstream but the available init() method
only takes
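If the SolrJ plumbing is unclear, the handler can also be exercised over plain HTTP, which shows what the content stream must contain. A sketch (core name and field names are placeholders):

```text
POST http://localhost:8983/solr/collection1/analysis/document
Content-Type: application/xml

<docs>
  <doc>
    <field name="id">doc-1</field>
    <field name="text">the text whose indexed tokens you want to see</field>
  </doc>
</docs>
```

The response breaks down each field's analysis chain, token by token, which is usually what "what has been indexed exactly" comes down to.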
Ahh, thank you,
that explains it. I changed the port to 9983, not knowing that the stop port
would still resolve to the old port.
So I guess I just need to change it to something else then.
> Subject: Re: How to configure solr to not bind at 8983
> To: solr-user@lucene.apache.org
> From: apa...@elyograg.o
On 8/20/2015 2:34 AM, Samy Ateia wrote:
> I changed the solr listen port in the solr.in.sh file in my solr home
> directory by setting the variable: SOLR_PORT=.
> But Solr is still trying to also listen on 8983 because it gets started with
> the -DSTOP.PORT=8983 variable.
>
> What is this -D
On 8/20/2015 1:49 AM, Merlin Morgenstern wrote:
> I am running 2 dedicated servers on which I plan to install Solrcloud with
> 2 solr nodes and 3 ZK.
>
> From Stackoverflow I learned that the best method for autostarting
> zookeeper on ubuntu 14.04 is to install it via "apt-get install
> zookeeper
You can view the logging level using the Dashboard URL
http://localhost:8983/solr/#/~logging/level and even set it for the session,
but otherwise you can look into server/resources/log4j.properties. Refer to
https://cwiki.apache.org/confluence/display/solr/Configuring+Logging
On Thu, Aug 20, 201
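For a persistent change (the Dashboard setting lasts only for the session), the relevant lines in server/resources/log4j.properties look roughly like this in Solr 5.x (appender names may differ in your build):

```text
# raise the default verbosity threshold
log4j.rootLogger=WARN, file, CONSOLE
# keep Solr itself at INFO if desired
log4j.logger.org.apache.solr=INFO
```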
Hi All,
We have a cluster environment on JBoss. All of our deployed applications,
including Solr, are protected by OpenAM. On slave nodes we enabled Solr to
communicate with master nodes to get data.
Since Solr on the master is protected with OpenAM, the slave can't talk to it. In
solr.xml there is a wa
When I use the SolrJ API to add category data to Solr,
there is a lot of DEBUG output.
How do I turn this off, or how do I configure the logging?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-close-log-when-use-the-solrj-api-tp4224142.html
Sent from the Solr - User mailing
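The DEBUG lines from a SolrJ client usually come from the client's own logging configuration, not the server's. If log4j is on the client classpath, a properties sketch like this (logger names assumed from typical SolrJ/HttpClient output) quiets it:

```text
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.SimpleLayout
# silence the chatty HTTP and Solr client loggers
log4j.logger.org.apache.http=WARN
log4j.logger.org.apache.solr=WARN
```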
Hi Samy,
Any particular reason not to use the -p parameter to start it on
another port?
./solr start -p 9983
With Regards
Aman Tandon
On Thu, Aug 20, 2015 at 2:02 PM, Modassar Ather
wrote:
> I think you need to add the port number in solr.xml too under hostPort
> attribute.
>
> STOP.PORT i
Thanks Erick. Even a 1-second commit interval is fine for us. But in that case
the filter cache will also be flushed every second. The end user will still feel
slowness due to this, as the query will take around 1 second if we use a filter query.
-Original Message-
From: Erick Erickson [mailto:ericke
I changed the solr listen port in the solr.in.sh file in my solr home directory
by setting the variable: SOLR_PORT=.
But Solr is still trying to also listen on 8983 because it gets started with
the -DSTOP.PORT=8983 variable.
What is this -DSTOP.PORT variable for and where should I configure
I think you need to add the port number in solr.xml too under hostPort
attribute.
STOP.PORT is SOLR_PORT - 1000 and is set in the bin/solr file.
As far as I understand this cannot be changed, but I am not sure.
Regards,
Modassar
On Thu, Aug 20, 2015 at 11:39 AM, Samy Ateia wrote:
> I changed the so
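The 1000 offset described above can be sanity-checked with a line of shell (9983 here is just the example port from this thread):

```shell
# bin/solr derives the stop port by subtracting 1000 from the listen port
SOLR_PORT=9983
STOP_PORT=$((SOLR_PORT - 1000))
echo "$STOP_PORT"
```

So a listen port of 9983 yields a stop port of 8983, which is why the old port still appeared to be in use.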
You might want to look into the following documentation. These documents
explain how to set up a ZooKeeper ensemble and cover ZooKeeper
administration.
https://cwiki.apache.org/confluence/display/solr/Setting+Up+an+External+ZooKeeper+Ensemble
http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmi
I am running 2 dedicated servers on which I plan to install Solrcloud with
2 solr nodes and 3 ZK.
From Stackoverflow I learned that the best method for autostarting
zookeeper on ubuntu 14.04 is to install it via "apt-get install
zookeeperd". I have that running now.
How could I add a second zook
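For the ensemble itself, a zoo.cfg sketch for three nodes (hostnames are placeholders; note that with only two machines, one host must run two ZooKeeper instances on distinct ports, which does not buy real fault tolerance if that host dies):

```text
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=host1:2888:3888
server.2=host2:2888:3888
server.3=host2:2889:3889
```

Each node also needs a myid file in its dataDir containing its server number, and the second instance on host2 needs its own dataDir and clientPort.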