On 27.02.2014 08:04, Shawn Heisey wrote:
On 2/26/2014 11:22 PM, Thomas Scheffler wrote:
I am one developer of a repository framework. We rely on the fact that
"SolrJ generally maintains backwards compatibility, so you can use a
newer SolrJ with an older Solr, or an older SolrJ with a newer Solr." [1]
On 2/26/2014 11:42 PM, Chandan khatua wrote:
> I have the below query in data-config.xml, but it throws an error while
> running the delta query: "java.sql.SQLSyntaxErrorException: ORA-00918:
> column ambiguously defined".
These are the FIRST two hits that I got when I searched for your full
error message.
On 2/26/2014 11:22 PM, Thomas Scheffler wrote:
> I am one developer of a repository framework. We rely on the fact that
> "SolrJ generally maintains backwards compatibility, so you can use a
> newer SolrJ with an older Solr, or an older SolrJ with a newer Solr." [1]
>
> This statement is not even true for bugfix releases like 4.6.0 -> 4.6.1.
you could just add a field with default value NOW in schema.xml, for example
On Wed, Feb 26, 2014 at 10:44 PM, pratpor wrote:
> Is it possible to know the indexing time of a document in Solr. Like there
> is an implicit field for "score" which automatically gets added to a document,
> is there a field that stores the value of indexing time?
None that I know of, but you can easily have a date field with default
set to NOW. Or you can have an UpdateRequestProcessor that adds it in:
http://lucene.apache.org/solr/4_6_1/solr-core/org/apache/solr/update/processor/TimestampUpdateProcessorFactory.html
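Either approach is only a few lines of configuration; a minimal sketch, where the field name `index_time` and the chain name are assumptions for illustration, not from this thread:

```xml
<!-- schema.xml: a date field whose default is the time of indexing -->
<field name="index_time" type="date" indexed="true" stored="true" default="NOW"/>

<!-- solrconfig.xml: the update-processor alternative -->
<updateRequestProcessorChain name="add-index-time">
  <processor class="solr.TimestampUpdateProcessorFactory">
    <str name="fieldName">index_time</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```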
Regards,
Alex
Is it possible to know the indexing time of a document in Solr. Like there is
an implicit field for "score" which automatically gets added to a document,
is there a field that stores the value of indexing time?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Know-indexing
Hi
I have the below query in data-config.xml, but it throws an error while
running the delta query: "java.sql.SQLSyntaxErrorException: ORA-00918:
column ambiguously defined".
Full data import is running fine.
Kindly suggest the changes required.
Thanking you,
-Ch
Hello,
We are facing some kinda weird problem. So here is the scenario:
We have a frontend and a middleware which processes user search
queries before posting them to Solr.
So when a user enters city:Frankenthal_(Pfalz) and then searches, there are
no results although there are fields on s
Hi,
I am one developer of a repository framework. We rely on the fact that
"SolrJ generally maintains backwards compatibility, so you can use a
newer SolrJ with an older Solr, or an older SolrJ with a newer Solr." [1]
This statement is not even true for bugfix releases like 4.6.0 -> 4.6.1.
Hi Jack,
Ya, the requirement is like that. I also want to apply various filters on
the field like shingle, pattern replace etc. That is why I am using the
text field. (But for the above run these filters were not enabled)
The facet count is set to 10 and the unique terms can go into the thousands.
Thanks, Jack. I will file a jira then. What are the generic ways to
improve/tune a solr query if we know its expensive? Does the analysis page
help with this at all?
On Wed, Feb 26, 2014 at 3:39 PM, Jack Krupansky wrote:
> I don't recall seeing anything related to passing the debug/debugQuery
>
You could try forcing things to go through function queries (via pseudo-fields):
fl=field(id), field(myfield)
If you're not requesting any stored fields, that *might* currently
skip that step.
-Yonik
http://heliosearch.org - native off-heap filters and fieldcache for solr
On Mon, Feb 24, 2014
On Feb 26, 2014, at 5:24 PM, Joel Cohen wrote:
> he's told me that he's doing commits in his SolrJ code
> every 1000 items (configurable). Does that override my Solr server settings?
Yes. Even if you have configured autocommit - explicit commits are explicit
commits that happen on demand. Ge
The bf parameter adds the value of a function query to the document score.
Your example did not include a bf parameter.
-- Jack Krupansky
-Original Message-
From: Ing. Andrea Vettori
Sent: Wednesday, February 26, 2014 12:26 PM
To: solr-user@lucene.apache.org
Subject: Search score prob
Just send the queries to Solr in parallel using multiple threads in your
application layer.
Solr can handle multiple, parallel queries as separate, parallel requests,
but does not have a way to bundle multiple queries on a single request.
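The suggestion above can be sketched with a plain ExecutorService in the application layer; `runQuery` below is a stub standing in for a real SolrJ `server.query(new SolrQuery(q))` call (an assumption for illustration, not actual SolrJ code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelQueries {
    // Stub standing in for solrServer.query(new SolrQuery(q)).
    static String runQuery(String q) {
        return "results-for:" + q;
    }

    // Fire each query as its own request, in parallel, and collect the responses.
    static List<String> queryInParallel(List<String> queries) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, queries.size()));
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String q : queries) {
                futures.add(pool.submit(() -> runQuery(q)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    results.add(f.get()); // blocks until that query finishes
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Each submitted task is an independent HTTP request to Solr, so the server handles them as separate, parallel queries.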
-- Jack Krupansky
-Original Message-
From: s
There is an existing Solr admin service to do that, which is what the Solr
Admin UI uses to support that feature:
For example:
curl
"http://localhost:8983/solr/analysis/field?analysis.fieldname=features&analysis.fieldvalue=Hello+World.&indent=true"
There are some examples in the next (unpubl
Check out org.apache.solr.schema.IndexSchema#readSchema(), which uses
org.apache.solr.schema.FieldTypePluginLoader to parse analyzers.
On Feb 26, 2014, at 7:00 PM, Software Dev wrote:
> Can anyone point me in the right direction. I'm trying to duplicate the
> functionality of the analysis requ
Can anyone point me in the right direction. I'm trying to duplicate the
functionality of the analysis request handler so we can wrap a service
around it to return the terms given a string of text. We would like to read
the same schema.xml file to configure the analyzer, tokenizer, etc. but I
can't se
I don't recall seeing anything related to passing the debug/debugQuery
parameters on for inter-node shard queries and then add that to the
aggregated response (if debug/debugQuery was specified.) Sounds worth a
Jira.
-- Jack Krupansky
-Original Message-
From: KNitin
Sent: Wednesday,
Are you sure you want to be faceting on a text field, as opposed to a string
field? I mean, each term (word) from the text will be a separate facet
value.
How many facet values are you typically returning?
How many unique terms occur in the facet field?
-- Jack Krupansky
-Original Message
Thanks, Shawn. I will try to upgrade solr soon
Reg firstSearcher: I think it does nothing now. I have configured it to use
ExternalFileLoader, but the external file has no contents. Most of the
queries hitting the collection are expensive tail queries. What will be
your recommendation to war
I use a SolrJ-based client to query Solr and I have been trying to construct
HTTP requests where facet name/value pairs are excluded. The web interface I
am working with has a refine further functionality, which allows excluding
one or more facet values. I have 3 facet fields: domain, content type
Hi Greg,
Thanks for the info. But the scenario in the link is a little different from
my requirement.
Regards,
On Wed, Feb 26, 2014 at 4:46 PM, Greg Walters wrote:
> I don't have much experience with faceting and its best practices though
> I'm sure someone else on here can pipe up to address y
Hi there
I have a few very expensive queries (at least that's what the QTime tells
me) that are causing high CPU problems on a few nodes. Is there a way
I can "trace" or do an "explain" on the Solr query to see where it spends
more time? More like profiling on a per-sub-query basis?
I have t
I read that blog too! Great info. I've bumped up the commit times and
turned the ingestion up a bit as well. I've upped hard commit to 5 minutes
and the soft commit to 60 seconds.
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:300000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
</autoSoftCommit>
Done, created under the SolrCloud component; couldn't find a more
appropriate one, like Server or Java or something. Hope it has all the info
needed. I could contribute to it sometime next week, waiting for new PC
parts from Amazon to have a proper after-work dev environment.
Regards,
Guido.
On 26/02/
I don't have much experience with faceting and its best practices though I'm
sure someone else on here can pipe up to address your questions there. In the
meantime, have you read
http://sbdevel.wordpress.com/2013/04/16/you-are-faceting-itwrong/?
On Feb 26, 2014, at 3:26 PM, David Miller wrot
Hi,
We want to send parallel queries (2-3 queries) in the same request from the
client to Solr. How do we send the parallel queries from the client side (using
SolrJ)?
Thanks.
Hi Greg,
Yes, the memory and cpu spiked for that machine. Another issue I found in
the log was "SolrException: Too many values for UnInvertedField faceting on
field".
I was using the fc method. Will changing the method/params help?
One thing I don't understand is that, the query was returning onl
Thanks Guido - any chance you could file a JIRA issue for this?
- Mark
http://about.me/markrmiller
On Feb 26, 2014, at 6:28 AM, Guido Medina wrote:
> I think it would need Guava v16.0.1 to benefit from the ported code.
>
> Guido.
>
> On 26/02/14 11:20, Guido Medina wrote:
>> As notes also st
Thanks Shalin, that code might be helpful... do you know if there is a
reliable way to line up the ranges with the shard numbers? When the problem
occurred we had 80 million documents already in the index, and could not
issue even a basic 'deleteById' call. I'm tempted to assume they are just
assig
Thanks Timothy,
I gave these a try, and -XX:+CMSPermGenSweepingEnabled seemed to make the
error happen more quickly. With this option on, it didn't seem to do
the intermittent garbage collection that delayed the issue with it off.
I was already using a max of 512MB, and I can reproduce it w
Hi Josh,
Try adding -XX:+CMSPermGenSweepingEnabled, as I think for some VM versions
permgen collection was disabled by default.
Also, I use -XX:MaxPermSize=512m -XX:PermSize=256m with Solr, so 64M may be
too small.
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
___
I notice that in Solr 4.6.1, CollapsingQParserPlugin is slower than standard
Solr field grouping. I have a Solr index of 1 docs, with a signature
field which is a Solr dedup field of the doc content. The majority of the
signatures are unique.
With standard Solr field grouping,
http://loca
We are using the Bitnami version of Solr 4.6.0-1 on a 64-bit Windows
installation with 64-bit Java 1.7u51, and we are seeing consistent issues
with PermGen exceptions. We have the permgen configured to be 512MB.
Bitnami ships with a 32-bit version of Java for Windows and we are replacing
it with a 64-bi
IIRC faceting uses copious amounts of memory; have you checked for GC activity
while the query is running?
Thanks,
Greg
On Feb 26, 2014, at 1:06 PM, David Miller wrote:
> Hi,
>
> I am encountering an issue where Solr nodes goes down when trying to obtain
> facets on a text field. The cluster
Hi,
I am encountering an issue where Solr nodes go down when trying to obtain
facets on a text field. The cluster consists of a few servers and has
around 200 million documents (small to medium). I am trying faceting on
this field for the first time and it gives a 502 Bad Gateway error along with
s
Hi;
As Daniel mentioned, it is just for the "first time" and not a suggested
approach. However, if you follow that way you can assign shards to machines.
On the other hand, you cannot change it later with the
same procedure.
Thanks;
Furkan KAMACI
2014-02-26 15:53 GMT+02:00 Daniel Collins :
I'm afraid I have to manually retrieve all docs for the suggested query in the
current filter (category:Cars&q=Renau) and count them to get the frequency
in the given filter.
2014-02-26 19:09 GMT+01:00 Hakim Benoudjit :
> It seems that suggestion frequency stays the same with filter query (fq).
>
>
> 2014-0
It seems that suggestion frequency stays the same with filter query (fq).
2014-02-26 19:05 GMT+01:00 Ahmet Arslan :
>
>
> Just a guess, what happens when you use filter query?
> fq=category:Cars&q=Renau
>
>
>
> On Wednesday, February 26, 2014 7:38 PM, Hakim Benoudjit <
> h.benoud...@gmail.com> w
Just a guess, what happens when you use filter query? fq=category:Cars&q=Renau
On Wednesday, February 26, 2014 7:38 PM, Hakim Benoudjit
wrote:
I mean that I want the suggestion frequency to count only documents in the current
query (Solr 'q'). My issue is that even if the suggested 'word' is correct, the
f
I mean that I want the suggestion frequency to count only documents in the
current query (Solr 'q'). My issue is that even if the suggested 'word' is correct,
the frequency is relative to the whole index and not only to the current query.
Suppose that I have 'q = category:Cars'; in this case, if my searched query
is 'R
Hi, I'm new to Solr and I'm trying to understand why I don't get what I want
with the bf parameter.
The query debug information follows.
What I don't understand is why the result of the bf parameter is so low in
score compared to matched fields.
Can anyone help ?
Thank you
0
19
tr
Hi,
What do you mean by "suggestions only for the current category"? Do you mean
that the suggested word(s) should return non-zero hits for that category?
Ahmet
On Wednesday, February 26, 2014 6:36 PM, Hakim Benoudjit
wrote:
@Jack Krupansky, here is the important portion of my solrconfig.xml:
<str name="name">default</str>
<str name="field">title</str>
<str name="classname">solr.DirectSolrSpellChecker</str>
<str name="distanceMeasure">internal</str>
<float name="accuracy">0.5</float>
<int name="maxEdits">2</int>
<int name="minPrefix">1</int>
<int name="maxInspections">5</int>
<int name="minQueryLength">4</int>
<float name="maxQueryFrequency">0.01</float>
As you guess 'title' field is the one I'm searching & the one I'm building
my suggestions from.
@Ahmet Arslan: I
Hi Hakim,
According to the wiki, spellcheck.q is intended to be used with 'spelling ready'
query/input.
'Spelling ready' means it does not contain field names, AND, OR, etc.
Something like this should work: spellcheck.q=value1 value2&q=+field1:value1
+field2:value2
Ahmet
On Wednesday, February 26, 2014 5:
Could you post the request URL and the XML/JSON Solr response? And the
solrconfig for both the query request handler and the spellcheck component.
Is your spell check component configured for both fields, field1 and field2?
-- Jack Krupansky
-Original Message-
From: Hakim Benoudjit
S
I have some difficulties using `spellcheck.q` to get suggestions only for the
current query.
When I set `spellcheck.q` to Lucene query format (field1:value1 AND
field2:value2), it doesn't return any results.
I had supposed that the value stored in `spellcheck.q` is just the value
of ``spellcheck`
February 2014, Apache Solr™ 4.7 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.7
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted searc
This is only true the *first* time you start the cluster. As mentioned
earlier, the correct way to assign shards to cores is to use the collection
API. Failing that, you can start cores in a determined order, and the
cores will assign themselves a shard/replica when they first start up.
From tha
Shalin,
Great, thanks for the clear explanation. Let me try to make my scoring
function part of QueryResultKey.
Thanks & Regards,
Senthilnathan V
On Wed, Feb 26, 2014 at 5:40 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> The problem here is that your custom scoring function
If you have 15 shards and assuming that you've never used shard
splitting, you can calculate the shard ranges by using new
CompositeIdRouter().partitionRange(15, new
CompositeIdRouter().fullRange())
This gives me:
[8000-9110, 9111-a221, a222-b332,
b333-c443, c444000
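As a rough sketch of the arithmetic (simplified; the real CompositeIdRouter additionally rounds range boundaries, so its exact hex values differ from these):

```java
import java.util.ArrayList;
import java.util.List;

class HashRanges {
    // Split the signed 32-bit hash space into `partitions` contiguous ranges.
    // Simplified sketch of what CompositeIdRouter.partitionRange computes;
    // Solr also rounds boundaries, so its output differs slightly.
    static List<long[]> partitionRange(int partitions) {
        long min = Integer.MIN_VALUE;            // 0x80000000
        long max = Integer.MAX_VALUE;            // 0x7fffffff
        long step = ((max - min) + 1) / partitions;
        List<long[]> ranges = new ArrayList<>();
        for (int i = 0; i < partitions; i++) {
            long start = min + i * step;
            long end = (i == partitions - 1) ? max : start + step - 1;
            ranges.add(new long[] { start, end });
        }
        return ranges;
    }

    // Render a boundary as 32-bit hex, the way Solr prints hash ranges.
    static String hex(long v) {
        return Integer.toHexString((int) v);
    }
}
```

With 15 partitions the first range starts at 80000000 and each range is roughly 0x11111111 wide, which lines up with the range list quoted above.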
Hi,
I have a small problem using function queries. According to
http://wiki.apache.org/solr/FunctionQuery#Date_Boosting and
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents
I've tried using function queries to boost newer documents over older
ones. For
The problem here is that your custom scoring function (is that a
SearchComponent?) is not part of a query. The query result cache is defined
as SolrCache<QueryResultKey, DocList> where the QueryResultKey contains
Query, Sort, SortField[] and filters=List<Query>. So your custom
scoring function either needs to be present in the QueryResu
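A toy illustration of why this matters (the class below is a hypothetical stand-in, not Solr's actual QueryResultKey): only state that participates in equals/hashCode can distinguish cache entries, so a scoring function left out of the key would make two differently-scored searches collide on one cached result:

```java
import java.util.Objects;

class CacheKeyDemo {
    // Simplified stand-in for a query-result cache key: only the fields used
    // in equals/hashCode participate in cache lookups.
    static final class ResultKey {
        final String query;
        final String sort;
        final String scoringFn; // custom scoring function, included in the key

        ResultKey(String query, String sort, String scoringFn) {
            this.query = query;
            this.sort = sort;
            this.scoringFn = scoringFn;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof ResultKey)) return false;
            ResultKey k = (ResultKey) o;
            return query.equals(k.query) && sort.equals(k.sort)
                    && scoringFn.equals(k.scoringFn);
        }

        @Override public int hashCode() {
            return Objects.hash(query, sort, scoringFn);
        }
    }
}
```

Dropping `scoringFn` from equals/hashCode would make the two keys below compare equal, i.e. the second scoring would wrongly hit the first result's cache entry.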
Ah, I didn't know that this is possible with DocTransformers. This is
also possible in Solr 4.7 (to be released soon) by using
shards.info=true in the request.
On Wed, Feb 26, 2014 at 2:32 PM, Ahmet Arslan wrote:
> Hi,
>
> I think with this : https://wiki.apache.org/solr/DocTransformers#A.5Bshard
I think it would need Guava v16.0.1 to benefit from the ported code.
Guido.
On 26/02/14 11:20, Guido Medina wrote:
As notes also stated at concurrentlinkedhashmap v1.4, the performance
changes were ported to Guava (don't know to what version to be
honest), so, wouldn't be better to use MapMake
As the notes also state at concurrentlinkedhashmap v1.4, the performance
changes were ported to Guava (I don't know to what version, to be honest),
so wouldn't it be better to use the MapMaker builder?
Regards,
Guido.
On 26/02/14 11:15, Guido Medina wrote:
Hi,
I noticed Solr is using concurrentlinkedha
Hi,
I noticed Solr is using concurrentlinkedhashmap v1.2, which is for Java
5; according to the notes at
https://code.google.com/p/concurrentlinkedhashmap/ version 1.4 has
performance improvements compared to v1.2. Isn't Solr 4.x designed
against Java 6+? If so, wouldn't it benefit from v1.4?
Re
Hi,
> Don't run multiple instances of Solr on one machine. Instead, run one
> instance per machine and create the collection with the maxShardsPerNode
> parameter set to 2 or whatever value you need.
Ok.
> Yet another whole separate discussion: You need three physical nodes for
> a redundant z
> There is a round robin process when assigning nodes at cluster. If you want
> to achieve what you want you should change your Solr start up order.
Well, that is just weird. To bring a cluster to a reproducible state, I have to
bring the whole cluster down and start it up again in a specific ord
Thanks iorixxx,
SolrQuery parameters = new SolrQuery();
parameters.set("q","*:*");
parameters.set("fl","Id,STATE_NAME,[shard]");
parameters.set("distrib","true");
QueryResponse response = server.query(parameters);
It's working fine now.
Hi,
I think with this : https://wiki.apache.org/solr/DocTransformers#A.5Bshard.5D
Ahmet
On Wednesday, February 26, 2014 10:36 AM, search engn dev
wrote:
I have set up a Solr cloud of two shards and two replicas. I am using SolrJ for
communicating with Solr. We are using CloudSolrServer for sea
I have set up a Solr cloud of two shards and two replicas. I am using SolrJ for
communicating with Solr. We are using CloudSolrServer for searching in the Solr
cloud. Below is my code:
String zkHost =
"host1:2181,host1:2182,host1:2183,host1:2184,host1:2185";
CloudSolrSe
Hi Dmitry,
Thanks for your feedback. Couple of inline responses below.
On Mon, Feb 24, 2014 at 4:43 AM, Dmitry Kan wrote:
> Hello!
>
> Just few random points:
>
> 1. Interesting site. I'd say there are similar sites, but this one has
> cleaner interface. How does your site compare to this one,
Hi Erick,
thank you for the reply.
Yes, I'm using the fast vector highlighter (Solr 4.3). Every request should
only deliver 10 results.
Here is my schema configuration for both fields: