On Fri, Dec 23, 2011 at 11:36 AM, Shyam Bhaskaran wrote:
> Hi,
>
> Can someone advise me on profiling Solr?
Have you looked at JMX: http://wiki.apache.org/solr/SolrJmx ?
Regards,
Gora
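For remote profiling with JConsole/VisualVM (or YourKit's JMX views), the usual first step is starting the servlet container with the JVM's standard remote-JMX flags. A minimal sketch; the port number and the disabled auth/SSL are assumptions suitable only for a trusted network:

```sh
# Append to the container's startup options (e.g. JAVA_OPTS for Tomcat/Jetty)
JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=18983 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```

Solr's own MBeans additionally need `<jmx/>` enabled in solrconfig.xml, as described on the wiki page above.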
Hi,
Can someone advise me on profiling Solr?
We are seeing performance issues, and from the debug flag output it seems the
highlighting component is causing the overhead.
I wanted to find out what is causing the overhead in highlighting for certain
queries.
I assume IO or CPU is causing t
Hi all,
I have an object like this:
public class Link {
    private long id;
    private String url;
    // ~20 other properties
    private int vote; // vote total, used for sorting
}
so when I index a document, my Lucene document also contains all the fields
from my Link object, e.g.
doc_id = 1
url = "solr.org
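(For readers: the usual mapping is one Solr field per bean property. A hypothetical schema.xml fragment for this Link class; the type names are assumptions and must match types declared in the schema's `<types>` section:)

```xml
<fields>
  <field name="id"   type="long"   indexed="true" stored="true" required="true"/>
  <field name="url"  type="string" indexed="true" stored="true"/>
  <!-- ... fields for the other ~20 properties ... -->
  <field name="vote" type="int"    indexed="true" stored="true"/> <!-- sort field -->
</fields>
<uniqueKey>id</uniqueKey>
```

Sorting by the vote total is then just &sort=vote desc on the query.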
Hi Jean,
I am also looking into profiling Solr and wanted to check with you whether
you were able to use YourKit successfully for Solr profiling, and whether you
were able to find the bottleneck in your situation.
Can you share how you found the performance bottleneck and
fixed the issue?
Being new to xml/xslt/solr, I am hoping someone can explain/help me with the
following:
I am using Apache Solr 3.4.0. I have a PHP page for submitting the search and
displaying the results in HTML. I indexed a 1.5 MB PDF document (400
pages). Using the admin interface with a *:* query everything i
thanks, that's what I had thought. Wasn't sure if there was a benefit
either way.
On Fri, Dec 16, 2011 at 3:29 PM, Mark Miller wrote:
> On Fri, Dec 16, 2011 at 8:14 AM, Jamie Johnson wrote:
>
>> What is the most appropriate way to configure Solr when deploying in a
>> cloud environment? Should
Sure, but what about inappropriate stemming in one language that
happens to match something in another?
In general, putting multiple languages into a single field usually
only makes sense when the
overwhelming number of documents are in one language...
Best
Erick
On Thu, Dec 22, 2011 at 2:41 PM,
This turned out to be SOLR-2986.
-Original Message-
From: Scott Smith [mailto:ssm...@mainstreamdata.com]
Sent: Thursday, December 22, 2011 1:24 PM
To: solr-user@lucene.apache.org
Subject: Exception in Solr server on "more like this"
I've been trying to get "More like this" running under
Hi,
Is it possible to sort fields or facets using a custom Collator? I found only
one solution for fields, the solr.CollationKeyFilterFactory filter, but there
are some problems with it. The first problem is that it doesn't
work with facet sorting. The second is that it requires an additional field.
On 12/22/2011 4:39 AM, Dean Pullen wrote:
> Yeh the drop index via the URL command doesn't help anyway - when rebuilding
> the index the timestamp is obviously ahead of master (as the slave is being
> created now) so the replication will still not happen.
If you deleted the index and create the
I've been trying to get "More like this" running under Solr 3.5. I get the
exception below. The HTTP request is also highlighted below.
I've looked at the FieldType code and I don't understand what's going on there.
So, while I know what a null pointer exception means, it isn't telling me what
Hi Erick,
Why would querying be wrong?
It is my understanding that if I have, say, 3 docs and each of them has been
indexed with its own language stemmer, then sending a query will search all
docs and return matching results. Say a query is "driving" and one of
the docs has drive a
You're probably hitting the default limit on a field. This is set in
solrconfig.xml via the <maxFieldLength> element. The first thing I'd try is
upping that to, say, 1000, reindexing, and seeing if that fixes your problem.
This is the number of *tokens*, not characters; roughly the number of words...
Searching for the c
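For reference, in Solr 3.x that limit is configured in solrconfig.xml; a sketch with an assumed raised value:

```xml
<!-- solrconfig.xml: per-field token limit; tokens beyond it are
     silently dropped at index time, so raise it and reindex -->
<indexDefaults>
  <maxFieldLength>100000</maxFieldLength>
</indexDefaults>
```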
Not really. And it's hard to make sense of how this would work in practice,
because stemming the document (even if you could) is only
half the battle.
How would querying work then? No matter what language you used
for your stemming, it would be wrong for all the documents that used a
different language.
Hi Zoran,
These numbers are all pretty small, so you will be fine even with a pair of
"average servers" - it looks like everything will fit in RAM even if you have
only 2 GB of it.
245 QPS is not trivial, but with everything in RAM I believe even on modest
hardware you will be just fine.
Otis
On 12/21/2011 4:13 AM, Thomas Fischer wrote:
I'm trying to move forward with my solr system from 1.4 to 3.5 and ran into
some problems with solr home.
Is this a known problem?
My solr 1.4 gives me the following messages (amongst many many others…) in
catalina.out:
INFO: No /solr/home in JNDI
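That INFO line just means no JNDI entry was found, and Solr falls back to the solr.solr.home system property or ./solr. Under Tomcat the JNDI entry is declared in the webapp's context file; the paths below are assumptions:

```xml
<!-- e.g. $CATALINA_HOME/conf/Catalina/localhost/solr.xml -->
<Context docBase="/opt/solr/apache-solr-3.5.0.war" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/opt/solr/home" override="true"/>
</Context>
```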
On 12/21/2011 9:43 AM, Chantal Ackermann wrote:
Hi Shawn,
maybe the requests that fail have a certain pattern - for example that
they are longer than all the others.
The query for the exception I sent is shown in the pastebin. Here is
the query and for reference, the pastebin URL:
did:(286
On Thu, Dec 22, 2011 at 7:02 AM, Zoran | Bax-shop.nl <
zoran.bi...@bax-shop.nl> wrote:
> Hello,
>
> What are (ballpark figures) the hardware requirements (disk space, memory)
> SOLR will need in this case:
>
>
> * Heavy Dutch traffic webshop, 30.000 - 50.000 visitors a day
>
Unique users doesn
Thanks everyone! That was very helpful.
-Ahmed
On Thu, Dec 22, 2011 at 5:15 AM, Chantal Ackermann <
chantal.ackerm...@btelligent.de> wrote:
>
> Hi Ahmed,
>
> if you have a multi core setup, you could change the file
> programmatically (e.g. via XML parser), copy the new file to the
> existing one
Hello,
What are (ballpark figures) the hardware requirements (disk space, memory) SOLR
will need in this case:
* Heavy Dutch traffic webshop, 30.000 - 50.000 visitors a day
* Visitors relying heavily on the search engine of the site
o 3.000.000 - 5.000.000 searches a day
*
Thanks Erik. I've seen that; I got it working with slop after a few
operations :).
So I am happy with this now. But I still have one issue: when I do a search I
also need to show highlighting on that field. With positionIncrementGap set
to 0, when I make a phrase search it does not return
On 21/12/2011 23:49, Koji Sekiguchi wrote:
(11/12/21 22:28), Tanguy Moal wrote:
Dear all,
[...]
I tried using both legacy highlighter and FVH but the same issue occurs.
The issue only triggers when relying on hl.q.
Thank you very much for any help,
--
Tanguy
Tanguy,
Thank you for re
positionIncrementGap is only really relevant for phrase searches. For
non-phrase searches you can effectively ignore it.
The problem here is what you mean by "consecutive element". In your
original example, if you mean that searching for "michael singer" should
NOT match, then you want to use phrase queries.
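Concretely, the gap is an attribute of the field type in schema.xml, and it matters only for multiValued fields: a phrase query cannot match across two values unless its slop exceeds the gap. An illustrative sketch, not the poster's actual schema:

```xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<!-- With name = ["steve michael", "singer jackson"], the phrase
     "michael singer" does not match: 100 positions separate the values. -->
```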
1) the number of documents for a given date range R1 that do not have
a value for the validToDate, i.e. the 99% of the documents
Makes no sense either. "for a given date range R1 that don't have a value".
You can't specify a range for a document that doesn't have a value!
I think you're asking for
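For what it's worth, counting the documents that simply lack the field is a pure negative range query; a sketch assuming the standard query parser:

```
q=*:*&fq=-validToDate:[* TO *]&rows=0
```

numFound in the response is then the count of documents with no validToDate. Intersecting that with a range on validToDate itself is what makes no sense, as noted above.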
Hi Guys,
I probably found a way to mimic the delta import for the fileEntityProcessor
(I have used it for xml files ...)
Adding this configuration in the xml-data-config :
And using command :
*command=full-import&clean=false*
Solr adds to the index only the files that were changed from the
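The actual data-config was lost from the message above; for readers, a hypothetical fragment that gets this "only changed files" behaviour uses FileListEntityProcessor's newerThan attribute (all paths, entity names, and the XPath here are assumptions):

```xml
<dataConfig>
  <dataSource type="FileDataSource"/>
  <document>
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/xml" fileName=".*\.xml" recursive="true"
            newerThan="'${dataimporter.last_index_time}'" rootEntity="false">
      <entity name="rec" processor="XPathEntityProcessor"
              url="${files.fileAbsolutePath}" forEach="/record">
        <field column="id" xpath="/record/id"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

Combined with command=full-import&clean=false, only files modified since the last import get re-indexed.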
Hi Ahmed,
if you have a multi core setup, you could change the file
programmatically (e.g. via XML parser), copy the new file to the
existing one (programmatically, of course), then reload the core.
I haven't reloaded the core programmatically, yet, but that should be
doable via SolrJ. Or - if y
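If going through SolrJ turns out to be awkward, a core reload is also a single HTTP call to the CoreAdmin handler; the host, port, and core name here are assumptions:

```sh
curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0'
```

In SolrJ the equivalent is CoreAdminRequest.reloadCore("core0", server).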
We're simply restoring the master via a backed up snapshot (created using the
ReplicationHandler) and then trying to get the slave to replicate it.
On 21 Dec 2011, at 18:09, Erick Erickson wrote:
> You can't. But index restoration should be a very rare thing,
> or you have some lurking problem i
Yeh the drop index via the URL command doesn't help anyway - when rebuilding
the index the timestamp is obviously ahead of master (as the slave is being
created now) so the replication will still not happen.
On 21 Dec 2011, at 16:37, Dean Pullen wrote:
> I can't see a way, if the slave is on
yes,
I see that my question was a bit confusing. But thanks for your answers.
I will try to clarify a bit.
I query on a date field, validToDate. The value for this field is not present
for 99% of the documents.
What I would like to get is
1) the number of documents for a given date range R1 t