Yes, you are right. It worked!
-Vivek
On Mon, Aug 25, 2014 at 7:39 PM, Ahmet Arslan
wrote:
> Hi Vivek,
>
> how about this?
>
> Iterator<SolrDocument> iter = queryResponse.getResults().iterator();
>
> while (iter.hasNext()) {
>   SolrDocument resultDoc = iter.next();
>
>   Collection<Object> content =
>
Hello everyone :)
I have an index for groupId and one for product. For an input search
keyword, I want to boost the result only if the keyword appears in both the
groupId and product indices.
I was able to get Solr join with fq to work with the following syntax:
example: q=searchTerm&fq={!join from=id
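In case it helps anyone searching the archives later, a complete filter of
that shape might look like the following (the to field, the fromIndex core
name and the inner query are my guesses, since the original message is cut
off here):

```
q=searchTerm&fq={!join from=id to=groupId fromIndex=product}name:searchTerm
```

The {!join} parser runs the inner query against the product core, collects
the id values of the matching documents, and keeps only those documents in
the main core whose groupId is in that set.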
Hi Philippe,
You can indeed copy an index like that. The problem probably arises because
4.9.0 is using core discovery by default. This wiki page will shed some
light:
https://wiki.apache.org/solr/Core%20Discovery%20%284.4%20and%20beyond%29
Michael Della Bitta
Applications Developer
Just in case someone else runs into this post, I think the following two
URLs have me sorted:
http://techkites.blogspot.com/2014/06/performance-tuning-and-optimization-for.html
http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/1.1.0-beta2/Cloudera-Search-User-Guide/csug_tuning
I forgot to mention in the previous post that I changed the analysis engine
from
/org/apache/uima/desc/OverridingParamsExtServicesAE.xml
to
/org/apache/uima/desc/AggregateSentenceAE
In doing so, I forgot the '.xml' extension, which is what was causing the
error. It would be helpful if the error
And a comparison to Elasticsearch would be helpful, since ES gets a lot of
mileage from their super-easy JSON support. IOW, how much of the ES
"advantage" is eliminated.
-- Jack Krupansky
-Original Message-
From: Noble Paul
Sent: Monday, August 25, 2014 1:59 PM
To: solr-user@lucene.a
Hi Jack,
I uploaded the code for a friend here:
http://www.solrfromscratch.com/2014/08/20/embedded-documents-in-solr/ [it
is not the latest code, I will update it in a couple of hours]
Multilevel nesting is supported; in the case of arrays, e.g.:
personalities_json: [
  {id:5},
  {id:3}
]
initial
The simplest use case is to dump the entire JSON using split=/&f=/** . I am
planning to add an alias for the same (SOLR-6343).
Nested docs support is missing now and we will need to add it; a ticket
needs to be opened.
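For anyone following the thread, the full-dump call would look roughly like
this (host, core name and file name are placeholders; the endpoint is the
/update/json/docs handler added by SOLR-6304):

```
curl 'http://localhost:8983/solr/collection1/update/json/docs?split=/&f=/**&commit=true' \
  -H 'Content-Type: application/json' \
  --data-binary @data.json
```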
On Mon, Aug 25, 2014 at 6:45 AM, Jack Krupansky
wrote:
> Thanks, Erik, but... I
Hi,
From the discussion it is not clear if this is a fixable bug in the case of
documents being in different shards. If this is fixable could someone please
direct me to the part of the code so that I could investigate.
Thanks.
Alex.
-Original Message-
From: Andrew Shumway
To
On 8/25/2014 4:23 AM, Jakov Sosic wrote:
> we ended up using cron to restart Tomcats every 7 days, each solr node
> per day... that way we avoid GC pauses.
>
> Until we figure things out in our dev environment and test GC
> optimizations, we will keep it this way.
If it's only doing a long GC paus
SOLR-6304 flattens a single JSON object into a single Solr document. See
Noble’s blog http://searchhub.org/2014/08/12/indexing-custom-json-data/ which
states:
split: This parameter is required if you wish to transform the input
JSON. This is the path at which the JSON must be split.
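To make the flattening concrete, here is a small self-contained sketch (my
own illustration, not Solr's actual code) of what "flattens a single JSON
object into a single Solr document" means: every leaf of a nested object
(modeled here as nested Maps) becomes one field on a flat document. The
dotted field naming is my assumption for readability; Solr's actual naming
under split=/ with f=/** may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenSketch {
    // Recursively walk the nested structure and collect every leaf as a
    // path -> value pair on the flat output map.
    @SuppressWarnings("unchecked")
    static void flatten(String prefix, Map<String, Object> node, Map<String, Object> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String path = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object v = e.getValue();
            if (v instanceof Map) {
                flatten(path, (Map<String, Object>) v, out); // descend into nested object
            } else {
                out.put(path, v); // leaf becomes a single field
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> address = new LinkedHashMap<>();
        address.put("city", "Bangalore");
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", "5");
        doc.put("address", address);

        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", doc, flat);
        System.out.println(flat); // {id=5, address.city=Bangalore}
    }
}
```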
I added default="true" to my updateRequestProcessorChain:
Now I'm getting errors when running the DIH:
ERROR org.apache.solr.core.SolrCore – org.apache.solr.common.SolrException:
org.apache.uima.resource.ResourceInitializationException
at
org.apache.solr.uima.processor.UIMAUpdateReque
Hi Vivek,
how about this?
Iterator<SolrDocument> iter = queryResponse.getResults().iterator();
while (iter.hasNext()) {
    SolrDocument resultDoc = iter.next();
    Collection<Object> content = resultDoc.getFieldValues("discussions");
}
On Monday, August 25, 2014 4:55 PM, Vivekanand Ittigi
wrote:
Hi,
Hi,
I am making some performance tests with a backup index from one week ago.
For these tests I use a newly provisioned infrastructure identical to my
production environment.
As my production collections have 2 shards each, I begin the test putting
each backup shard in a different host so that I
Hi,
I have a multivalued field and I want to display all array elements using
SolrJ.
I used the command mentioned below, but I'm able to retrieve only the 1st
element of the array.
response.getResults().get(0).getFieldValueMap().get("discussions")
Output: Creation Time - 2014-06-12 17:37:53.0
NO
Thanks, Erik, but... I've read that Jira several times over the past month,
and it is far too cryptic for me to make any sense out of what it is really
trying to do. A simpler approach is clearly needed.
My perception of SOLR-6304 is not that it indexes a single JSON object as a
single Solr doc
Jack et al - there’s now this, which is available in the any-minute release of
Solr 4.10: https://issues.apache.org/jira/browse/SOLR-6304
Erik
On Aug 25, 2014, at 5:01 AM, Jack Krupansky wrote:
> That's a completely different concept, I think - the ability to return a
> single field v
Interesting. First, an apology for an error in my e-book - it says that the
enablePositionIncrements parameter for the stop filter defaults to "false",
but it actually defaults to "true". The question mark represents a "position
increment". In your case you don't want position increments, so add
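(The message is cut off above; judging from the context, the suggestion is
presumably to set enablePositionIncrements="false" on the stop filter in
the field type's analysis chain, along these lines. This fragment is my
reconstruction, not a quote, and on Solr 4.4+ this setting only takes
effect with an older luceneMatchVersion.)

```
<filter class="solr.StopFilterFactory" words="stopwords.txt"
        enablePositionIncrements="false"/>
```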
To be honest, I'm not precisely sure what Google is really doing under the
hood since there is no detailed spec publicly available. We know that
quotes do force a phrase search in Google, but do they disable stemming or
preserve case and special characters? Unknown. Although, my PERCEPTION of
Hi,
Thanks for your reply.
I thought that Google search works the same way (quotes stand for exact match).
An example of what I need:
Objects:
- test host
- test_host
- test $host
- test-host
When I search for test host I should get all of the above results.
When I search for "test host" I'll get only test
That's a completely different concept, I think - the ability to return a
single field value as a structured JSON object in the "writer", rather than
simply "loading" from a nested JSON object and distributing the key values
to normal Solr fields.
-- Jack Krupansky
-Original Message-
On 08/19/2014 04:58 PM, Shawn Heisey wrote:
On 8/19/2014 3:12 AM, Jakov Sosic wrote:
Thank you for your comment.
How did you test these settings? I mean, that's a lot of tuning and I
would like to set up some test environment to be certain this is what
I want...
I included a section on tools
Hello,
is it possible to copy a collection created with SOLR 4.6.0 to a SOLR 4.9.0
server?
I have just copied a collection called 'collection3', located in
solr4.6.0/example/solr, to solr4.9.0/example/solr, but to no avail, because my
SOLR 4.9.0 Server's admin does not list it among the avai
Hi,
Have a look at the 'data' directory in your solr_home.
The .fdt and .fdx files are used to store the data of stored fields. You can
consider the size of the other files as the size Solr uses for its index.
You can have a look at
http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/codecs/luce
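As a quick illustration of that split (the path below is a placeholder;
point it at the index directory of your core, typically
solr_home/<core>/data/index), you can compare the size of the stored-field
files against the whole directory:

```
# Stored-field data: the .fdt (field data) and .fdx (field index) files.
INDEX=./data/index
du -shc "$INDEX"/*.fdt "$INDEX"/*.fdx | tail -1

# Total size of the index directory; the difference is roughly the
# inverted-index structures themselves (terms, postings, norms, etc.).
du -sh "$INDEX"
```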
I have Solr working for my stats pages. When I run the index I need to know
how much of the disk space occupied by Solr is used for the index and how
much is used for storing non-indexed (stored) data.
A valid search:
http://pastie.org/pastes/9500661/text?key=rgqj5ivlgsbk1jxsudx9za
An Invalid search:
http://pastie.org/pastes/9500662/text?key=b4zlh2oaxtikd8jvo5xaww
What weird I found is that the valid query has:
"parsedquery_toString": "+(url_words_ngram:\"twitter com zer0sleep\")"
And the invali