Hello,
I believe getHighlighting() returns Map<String, Map<String, List<String>>>,
keyed by document id, then field name, then the highlight snippets.
Generally Maps are not expected to iterate in order unless you know
the underlying implementation of the Map, for example LinkedHashMap
will iterate in the insertion order and HashMap will not.
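A quick JDK-only illustration of that ordering difference (class names here are standard java.util, nothing Solr-specific):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("doc3", 3);
        linked.put("doc1", 1);
        linked.put("doc2", 2);
        System.out.println(linked.keySet()); // prints [doc3, doc1, doc2]

        // HashMap gives no ordering guarantee; the order can change with
        // map size or JVM version, so never rely on it
        Map<String, Integer> hashed = new HashMap<>(linked);
        System.out.println(hashed.keySet()); // some arbitrary order
    }
}
```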
You should be able to take the doc id from one of t
Hello,
The QueryRequest was just an example, it will work with any request
that extends SolrRequest.
How are you indexing your documents?
I am going to assume you are doing something like this:
SolrClient client = ...
client.add(solrInputDocument);
Behind the scenes this will do something like
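For context, a rough sketch of that internal path (field values are illustrative, but UpdateRequest is the real SolrJ class): add() wraps the document in an UpdateRequest, which is why anything that works at the SolrRequest level applies to indexing too:

```java
// Roughly what SolrClient.add(doc) does behind the scenes:
// the document is wrapped in an UpdateRequest and processed.
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");        // illustrative fields
doc.addField("title_t", "hello");

UpdateRequest updateRequest = new UpdateRequest();
updateRequest.add(doc);
updateRequest.process(client);      // equivalent path to client.add(doc)
```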
Hello,
The exception you are getting looks more like you can't connect to the
IP address from where your SolrJ code is running, but not sure.
For the basic credentials, rather than trying to do something with the
http client, you can provide them on the request like this:
QueryRequest req = new
You should be able to start your Solr instances with "-h <hostname>".
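A hedged sketch of the per-request credentials idea (setBasicAuthCredentials exists on SolrRequest in Solr 6.x; the user and password values are placeholders):

```java
SolrQuery query = new SolrQuery("*:*");
QueryRequest req = new QueryRequest(query);
// Credentials ride along on this one request; no http-client wiring needed.
req.setBasicAuthCredentials("solr-user", "solr-password"); // placeholders
QueryResponse rsp = req.process(client);
```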
On Thu, Feb 9, 2017 at 12:09 PM, Xie, Sean wrote:
> Thank you Hrishikesh,
>
> The cluster property solved the issue.
>
> Now we need to figure out a way to give the instance a host name to solve the
> SSL error that IP not matching the
I had success doing something like this, which I found in some of the Solr
tests...
SolrResourceLoader loader = new SolrResourceLoader(solrHomeDir.toPath());
Path configSetPath = Paths.get(configSetHome).toAbsolutePath();
final NodeConfig config = new
NodeConfig.NodeConfigBuilder("embeddedSolrSer
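For reference, a fuller sketch of that test-derived pattern, under the assumption of Solr 6.x APIs (the builder's method names shifted between releases, so verify against your version; node name and paths are placeholders):

```java
// Sketch only: assumes Solr 6.x; node name and paths are placeholders.
SolrResourceLoader loader = new SolrResourceLoader(solrHomeDir.toPath());
Path configSetPath = Paths.get(configSetHome).toAbsolutePath();
NodeConfig config = new NodeConfig.NodeConfigBuilder("embeddedSolrServerNode", loader)
    .setConfigSetBaseDirectory(configSetPath.toString())
    .build();
EmbeddedSolrServer server = new EmbeddedSolrServer(config, "collection1");
```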
A possible problem might be that your certificate was generated for
"localhost" which is why it works when you go to https://localhost:8985/solr
in your browser, but when SolrJ gets the cluster information from ZooKeeper
the hostnames of the Solr nodes might be using an IP address which won't
work
Hello,
I think part of the problem is the mis-match between what you are
highlighting on and what you are searching on.
Your query has no field specified, so it must be searching a default field,
which looks like it would be _text_ since the copyField was set up to
copy everything to that field
e and instancePath.
If I remove the core.properties from src/test/resources/exampleCollection,
then it can write a new one to target/test-classes/exampleCollection, and
will even put the dataDir there by default.
On Mon, Oct 3, 2016 at 7:00 PM, Bryan Bende wrote:
> Yea I'll try to
> Alan Woodward
> www.flax.co.uk
>
>
> > On 3 Oct 2016, at 23:50, Bryan Bende wrote:
> >
> > Alan,
> >
> > Thanks for the response. I will double-check, but I believe that is going
> > to put the data directory for the core under coreHome/coreName.
thoughts let me know.
Thanks,
Bryan
On Mon, Oct 3, 2016 at 2:07 PM, Alan Woodward wrote:
> This should work:
>
> SolrCore solrCore
> = coreContainer.create(coreName,
> Paths.get(coreHome).resolve(coreName),
> Collections.emptyMap());
>
>
> Alan Woodward
>
Curious if anyone knows how to create an EmbeddedSolrServer in Solr 6.x,
with a core where the dataDir is located somewhere outside of where the
config is located.
I'd like to do this without system properties, and all through Java code.
In Solr 5.x I was able to do this with the following code:
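One hedged option in 6.x: since dataDir is a standard core.properties key, passing it in the properties map of the three-argument CoreContainer.create should relocate the index; the paths here are placeholders:

```java
// dataDir is a standard core.properties key; supplying it as a create()
// property should point the index outside the instance dir. Paths are placeholders.
SolrCore core = coreContainer.create(
    coreName,
    Paths.get(coreHome).resolve(coreName),                          // configs live here
    Collections.singletonMap("dataDir", "/data/solr/" + coreName)); // index lives here
```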
r successive
> search components _in the same query_. Otherwise, each component
> might have to do a disk seek.
>
> So I must be missing why you want to do this.
>
> Best,
> Erick
>
> On Thu, May 28, 2015 at 1:23 PM, Bryan Bende wrote:
> > Is there a way to the document c
Is there a way to disable the document cache on a per-query basis?
It looks like there's {!cache=false} for preventing the filter cache from
being used for a given query; I'm looking for the same thing for the
document cache.
Thanks,
Bryan
I'm trying to identify the difference between an exception when Solr is in
a bad state/down vs. when it is up but an invalid request was made (maybe
some bad data sent in).
The JavaDoc for SolrRequest process() says:
@throws SolrServerException if there is an error on the Solr server
@throws IOE
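One hedged way to separate the two cases in SolrJ: a reachable Solr that rejects the request generally surfaces as a SolrException carrying the HTTP status, while a down or unreachable Solr surfaces as SolrServerException or IOException:

```java
try {
    QueryResponse rsp = client.query(new SolrQuery("*:*"));
} catch (SolrException e) {
    // Solr answered but rejected the request (bad query, bad data, ...)
    int httpStatus = e.code(); // e.g. 400
} catch (SolrServerException e) {
    // Typically a server-side or communication problem: node down, timeout
} catch (IOException e) {
    // Low-level network failure
}
```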
Does anyone have experience tracking documents that a user "liked" /
"disliked" and then incorporating that into a MoreLikeThis query?
The idea would be to exclude any document a user disliked from ever
returning as a similar document, and to boost any document a user liked so
it shows up higher i
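For the "disliked" half, one hedged approach is a filter query that excludes those ids from the MoreLikeThis results (the handler path and ids are assumptions about your setup); boosting "liked" documents is less direct and may need a boost or re-rank query:

```java
// Assumes a MoreLikeThis handler registered at /mlt; ids are placeholders.
SolrQuery q = new SolrQuery("id:seed-doc");          // document to find similars for
q.setRequestHandler("/mlt");
q.addFilterQuery("-id:(disliked-1 OR disliked-2)");  // never return disliked docs
QueryResponse rsp = client.query(q);
```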
Does SolrJ have anything that allows you to change the update handler and
add something besides a SolrInputDocument?
I'm trying to figure out how to add JSON documents using the custom JSON
update handler (http://lucidworks.com/blog/indexing-custom-json-data/), but
doing it through SolrJ in order
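One hedged way to hit that handler from SolrJ is a ContentStreamUpdateRequest aimed at /update/json/docs (the JSON body and split param mirror the blog post's style and are illustrative):

```java
// Sketch: push raw JSON at the custom JSON handler via SolrJ.
String json = "{\"first\":\"bryan\",\"exams\":[{\"subject\":\"math\",\"score\":90}]}";
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/json/docs");
ContentStreamBase.StringStream stream = new ContentStreamBase.StringStream(json);
stream.setContentType("application/json");
req.addContentStream(stream);
req.setParam("split", "/exams"); // illustrative split path
client.request(req);
```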
When I've run an optimize with Solr 4.8.1 (by clicking optimize from the
collection overview in the admin ui) it goes replica by replica, so it is
never doing more than one shard or replica at the same time.
It also significantly slows down operations that hit the replica being
optimized. I've see
You can try lowering the mergeFactor in solrconfig.xml to cause more merges
to happen during normal indexing, which should result in more deleted
documents being removed from the index, but there is a trade-off
http://wiki.apache.org/solr/SolrPerformanceFactors#mergeFactor
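For reference, the setting lives in the indexConfig section of solrconfig.xml; a hedged example (the value is illustrative, the default is 10):

```xml
<indexConfig>
  <!-- Lower than the default 10: merges happen more often, reclaiming
       deleted documents sooner at the cost of indexing throughput -->
  <mergeFactor>5</mergeFactor>
</indexConfig>
```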
On Mon, Sep 29, 201
I ran into this problem as well when upgrading to Solr 4.8.1...
We had a somewhat large binary field that was "indexed=false stored=true",
but because of the copyField copying "*" to "text" it would hit the immense
term issue.
In our case we didn't need this field to be indexed (parts of it were
Theoretically this shouldn't happen, but is it possible that the two
replicas for a given shard are not fully in sync?
Say shard1 replica1 is missing a document that is in shard1 replica2... if
you run a query that would hit on that document and run it a bunch of
times, sometimes replica 1 will ha
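One hedged way to check for that is querying each replica's core directly with distrib=false and comparing counts (URLs are placeholders; older SolrJ versions use new HttpSolrServer(url) instead of the builder):

```java
// Ask a single core directly, bypassing distributed search.
HttpSolrClient replica1 = new HttpSolrClient.Builder(
    "http://host1:8983/solr/collection1_shard1_replica1").build(); // placeholder URL
SolrQuery q = new SolrQuery("id:the-suspect-doc");
q.set("distrib", "false"); // restrict the search to just this core
long found = replica1.query(q).getResults().getNumFound();
```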
; --Andrew Shumway
>
>
> -Original Message-
> From: Bryan Bende [mailto:bbe...@gmail.com]
> Sent: Friday, August 22, 2014 9:01 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Incorrect group.ngroups value
>
> Thanks Jim.
>
> We've been using the composi
be located in the same
> shard, otherwise the count will be incorrect. If you are using SolrCloud
> <https://wiki.apache.org/solr/SolrCloud>, consider using "custom hashing"*
>
> Cheers,
> Jim
>
>
>
> 2014-08-21 21:44 GMT+02:00 Bryan Bende :
>
>
Is there any known issue with using group.ngroups in a distributed Solr
using version 4.8.1 ?
I recently upgraded a cluster from 4.6.1 to 4.8.1, and I'm noticing several
queries where ngroups will be more than the actual groups returned in the
response. For example, ngroups will say 5, but then th
Does anyone know if it is possible to get date ranges working with the
ComplexPhraseQueryParser?
I'm using Solr 4.8.1 and seeing the same behavior described in this post:
http://stackoverflow.com/questions/19402268/solr-4-2-1-and-solr-1604-complexphrase-and-date-range-queries-do-not-work-toge
I
This is using the solr.TrieDateField, it is the field type "date" from the
example schema in solr 4.6.1:
After further testing I was only able to reproduce this in a sharded &
replicated environment (numShards=3, replicationFactor=2) and I think I
have narrowed down the issue, and at this point i
Using Solr 4.6.1 and in my schema I have a date field storing the time a
document was added to Solr.
I have a utility program which:
- queries for all of the documents in the previous day sorted by create date
- pages through the results keeping track of the unique document ids
- compare the total
Does calling commit with expungeDeletes=true result in a full rewrite of
the index like an optimize does? or does it only merge away the documents
that were "deleted" by commit?
Every two weeks or so we run a process to rebuild our index from the
original documents resulting in a large amount of d
I'm running Solr 4.3 with:
6
false
5000
When I start Solr and send in a couple of hundred documents, I am able to
retrieve documents after 5 seconds using SolrJ. However, from the Solr
admin console if I query for *:* it will show that there are docs in the
numFound attribute, but
Can you just use two queries to achieve the desired results ?
Query1 to get all actions where !entry_read:1 for some range of rows (your
page size)
Query2 to get all the entries with an entry_id in the results of Query1
The second query would be very direct and only query for a set of entries
equ
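The hand-off between the two queries is just assembling an id clause; a minimal sketch (the field name is from the thread, the ids are placeholders):

```java
import java.util.List;

public class EntryQueryBuilder {
    // Join the entry_ids returned by Query1 into a single clause for Query2.
    static String buildEntryQuery(List<String> entryIds) {
        return "entry_id:(" + String.join(" OR ", entryIds) + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildEntryQuery(List.of("17", "42", "99")));
        // prints entry_id:(17 OR 42 OR 99)
    }
}
```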
box/lucene-solr-user/201306.mbox/%3CCALo_M18WVoLKvepJMu0wXk_x2H8cv3UaX9RQYtEh4-mksQHLBA%40mail.gmail.com%3E
> What type of field are you grouping on? What happens when you distribute
> ?it? I.e. what specifically goes wrong?
> Upayavira
On Tue, Jun 25, 2013, at 09:12 PM, Bryan Bende wr
I was reading this documentation on Result Grouping...
http://docs.lucidworks.com/display/solr/Result+Grouping
which says...
sort - sortspec - Specifies how Solr sorts the groups relative to each
other. For example, sort=popularity desc will cause the groups to be sorted
according to the highest
I'm wondering what the expected behavior is for the following scenario...
We receive the same document in multiple formats and we handle this by
grouping, sorting the group by date received, and limiting the group to 1,
resulting in getting the most recent version of a document.
Here is an exampl