For the ideal, never give up, fighting!
On Thu, Dec 19, 2013 at 10:30 AM, xie kidd wrote:
> Hi all,
>
> When I try to set up an email data source as described at
> http://wiki.apache.org/solr/MailEntityProcessor , a connect timeout
> exception happens. I am sure the user and password are correct, and the
> r
Hello All,
I have a problem as described below and would like to hear your opinion:
I have multiple documents with the same unique id (by setting overwrite to
false). Let's say I have three documents (Doc1, Doc2, Doc3) and all
have the same unique id. I can search any one of the three docume
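For reference, keeping duplicates like this comes down to sending
overwrite=false with the update request. A minimal SolrJ 4.x sketch; the
core URL, id and field names here are made up for illustration:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class OverwriteFalseDemo {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server =
                new HttpSolrServer("http://localhost:8983/solr/collection1");
        UpdateRequest req = new UpdateRequest();
        // Keep every version instead of replacing by uniqueKey.
        req.setParam("overwrite", "false");
        for (int i = 1; i <= 3; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");           // same unique id on purpose
            doc.addField("title", "version " + i);
            req.add(doc);
        }
        req.process(server);
        server.commit();
    }
}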
Hi Dave,
Sorry for the delayed reply. Did you end up trying the (scary) caching
idea?
Yeah, there's no reasonable way today to access data from other fields of
the document in the analyzers. Creating an update request processor which
pulls the data prior to the field-by-field analysis and inj
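For what it's worth, here is a minimal sketch of that update-processor idea
against the Solr 4.x API; the class and field names are hypothetical, and a
matching UpdateRequestProcessorFactory plus a chain entry in solrconfig.xml
would still be needed:

import java.io.IOException;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

public class InjectFieldProcessor extends UpdateRequestProcessor {
    public InjectFieldProcessor(UpdateRequestProcessor next) {
        super(next);
    }

    @Override
    public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        // Pull data from one field and inject it into another before the
        // document reaches field-by-field analysis.
        Object source = doc.getFieldValue("source_field");
        if (source != null) {
            doc.addField("target_field", source);
        }
        super.processAdd(cmd); // hand the document down the chain
    }
}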
Hi all
Is there any way to get the payloads for a query in Solr?
Lucene has a class PayloadSpanUtil that has a method called
getPayloadsForQuery that gets the payloads for terms that match. Is there
something similar in Solr?
TIA
Puneet
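I'm not aware of anything built into Solr for this, but since Solr embeds
Lucene you could call PayloadSpanUtil from a custom plugin (e.g. a
SearchComponent) where you have the IndexReader and the parsed Query. A
minimal sketch against the Lucene 4.x API:

import java.io.IOException;
import java.util.Collection;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.payloads.PayloadSpanUtil;

public class PayloadLookup {
    // Returns the raw payload bytes of all terms matching the query.
    public static Collection<byte[]> payloadsForQuery(IndexReader reader, Query query)
            throws IOException {
        PayloadSpanUtil util = new PayloadSpanUtil(reader.getContext());
        return util.getPayloadsForQuery(query);
    }
}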
Well, I haven't tested it - if it's not ready yet, I will probably avoid it
for now.
> On 12/19/2013 1:46 PM, Patrick O'Lone wrote:
>> If I were to use the LFU cache instead of FastLRU on the filter cache, and
>> I enable auto-warming on that cache type, does it warm the most
>> frequently used fq on t
On 12/19/2013 1:46 PM, Patrick O'Lone wrote:
> If I were to use the LFU cache instead of FastLRU on the filter cache, and
> I enable auto-warming on that cache type, does it warm the most
> frequently used fq on the filter cache? Thanks for any info!
I wrote that cache. It's a really, really crappy
If I were to use the LFU cache instead of FastLRU on the filter cache, and
I enable auto-warming on that cache type, does it warm the most
frequently used fq on the filter cache? Thanks for any info!
--
Patrick O'Lone
Director of Software Development
TownNews.com
E-mail ... pol...@townnews.com
Ph
I implemented the PostFilter approach described by Joel. Just iterating
over the OpenBitSet, even without the scaling or the HashMap lookup, added
30ms to the query time, which kind of surprised me. There were about 150K
hits out of a total of 500K. Is OpenBitSet the best way to do this?
Thanks,
Peter
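For anyone following along, this is roughly the shape of the filter being
described: a PostFilter whose DelegatingCollector checks each candidate
document against an OpenBitSet. Class and field names are hypothetical;
only the Solr 4.x APIs are real:

import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.util.OpenBitSet;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.ExtendedQueryBase;
import org.apache.solr.search.PostFilter;

public class BitSetPostFilter extends ExtendedQueryBase implements PostFilter {
    private final OpenBitSet allowed; // global docids allowed to pass

    public BitSetPostFilter(OpenBitSet allowed) {
        this.allowed = allowed;
        setCache(false); // post filters must not be cached
        setCost(100);    // cost >= 100 runs this after the main query and filters
    }

    @Override
    public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
        return new DelegatingCollector() {
            private int base;

            @Override
            public void setNextReader(AtomicReaderContext context) throws IOException {
                base = context.docBase; // per-segment docid offset
                super.setNextReader(context);
            }

            @Override
            public void collect(int doc) throws IOException {
                // fastGet() skips the bounds check on this hot path
                if (allowed.fastGet(base + doc)) {
                    super.collect(doc);
                }
            }
        };
    }
}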
Hey Andrea! Thanks for answering. The complete stack trace follows
below (the other is just the same):
I'm going to try that modification of the logging level, but I'm really
considering debugging Tika and trying to fix it myself.
03:38:23 ERROR SolrCore org.apache.solr.common.Solr
I had a lot of problems with the stability of my cloud.
To improve the stability:
- Move ZooKeeper to another disk; the I/O from solr.home can kill your ensemble.
- Raise the ZooKeeper client timeout (zkClientTimeout) to 60s.
- Don't use a very big heap if you don't need it; try values around 4g and
increase only if you hit OOMs.
I would make one *collection* for each date range and then make a
collection alias or aliases that span the ones that you want to query.
http://wiki.apache.org/solr/SolrCloud#Collection_Aliases
I don't have a good idea for how to handle indexing off-cluster,
however.
Michael Della Bitta
On 12/19/2013 3:44 AM, ilay raja wrote:
> I have deployed solr cloud with an external zookeeper ensemble (5
> instances). I am running solr instances on two servers with a single-shard
> index. There are 6 replicas. I often see solr going down during high search
> load or whenever I run indexing doc
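Following up on the alias suggestion above: creating one is a single
Collections API call (action=CREATEALIAS). A minimal SolrJ 4.x sketch;
the ZooKeeper hosts, alias and collection names are made up:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CreateAliasDemo {
    public static void main(String[] args) throws Exception {
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "CREATEALIAS");
        params.set("name", "logs");                             // the alias
        params.set("collections", "logs_2013_11,logs_2013_12"); // spanned collections
        QueryRequest request = new QueryRequest(params);
        request.setPath("/admin/collections");
        server.request(request); // sends the Collections API call
        server.shutdown();
    }
}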
Are you using an NRT solution? How often do you commit? We see similar
issues with PeerSync, but then we have a very active NRT system and we
soft-commit sub-second; since PeerSync has a limit of 100 versions
before it decides it's too much to do, if we try to PeerSync whilst
indexing is running
Roman, do you have any results?
created SOLR-5561
Robert, if I'm wrong, you are welcome to close that issue.
On Mon, Dec 9, 2013 at 10:50 PM, Isaac Hebsh wrote:
> You can see the norm value in the "explain" text when setting
> debugQuery=true.
> If the same item gets different norm before/a
created SOLR-5560
On Tue, Dec 10, 2013 at 8:48 AM, William Bell wrote:
> Sounds like a bug.
>
>
> On Mon, Dec 9, 2013 at 1:16 PM, Isaac Hebsh wrote:
>
> > If so, can someone suggest how a query should be escaped (securely and
> > correctly)?
> > Should I escape the quote mark (and backslash ma
On Thu, Dec 19, 2013 at 4:14 PM, ilay raja wrote:
> Hi,
>
> I have deployed solr cloud with an external zookeeper ensemble (5
> instances). I am running solr instances on two servers with a single-shard
> index. There are 6 replicas. I often see solr going down during high search
> load or wheneve
That's a feature of the standard tokenizer. You'll have to use a field type
which uses the whitespace tokenizer to preserve special characters.
-- Jack Krupansky
-----Original Message-----
From: suren
Sent: Thursday, December 19, 2013 10:56 AM
To: solr-user@lucene.apache.org
Subject: Not abl
Unable to query strings ending with special characters; it skips the
last special character and still returns results. I am including the string
in double quotes.
For example, I am unable to query strings like "JOHNSON &" or "PEOPLES'".
It queries well for "JOHNSON & SONS" and "PEOPLES' SELF-HELP".
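You can see the difference directly with the Lucene 4.x analyzers that Solr
wraps; a small self-contained test (the field name and version constant are
just for illustration):

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenizerCompare {
    static void dump(String label, Analyzer analyzer, String text) throws IOException {
        TokenStream ts = analyzer.tokenStream("f", new StringReader(text));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        System.out.print(label + ": ");
        while (ts.incrementToken()) {
            System.out.print("[" + term + "] ");
        }
        ts.end();
        ts.close();
        System.out.println();
    }

    public static void main(String[] args) throws IOException {
        String text = "JOHNSON &";
        dump("standard  ", new StandardAnalyzer(Version.LUCENE_45), text);   // [johnson]
        dump("whitespace", new WhitespaceAnalyzer(Version.LUCENE_45), text); // [JOHNSON] [&]
    }
}

The standard analyzer drops the lone "&" at index time, so it can never
match at query time; the whitespace analyzer keeps it as a token.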
Sounds pretty weird. I would use 4.5.1. I don't know that it will address this,
but it's a very good idea.
This doesn't sound like a feature to me. I'd file a JIRA issue if it seems like
a real problem.
Are you using the old-style solr.xml with cores defined in it, or the new core
discovery mode
Sounds like you need to raise your ZooKeeper connection timeout.
As a side note, also make sure you are using a concurrent garbage collector -
stop-the-world pauses should be avoided. Just good advice :)
- Mark
On Dec 18, 2013, at 5:48 AM, Anca Kopetz wrote:
> Hi,
>
> In our SolrCloud clust
On 12/19/2013 2:35 AM, hariprasadh89 wrote:
> We have done the solr cloud setup:
> On one machine:
> 1. CentOS 6.3
> 2. Apache Solr 4.1
> 3. JBoss AS 7.1.1.Final
> 4. ZooKeeper
> Let's set up the ZooKeeper cloud on 2 machines
>
> download and untar ZooKeeper into the /opt/zookeeper directory on both servers
Hi Nagendra,
Really cool topic.
I'm really interested in discovering more about the three
similarity algorithms you offer (Term Similarity, Document Similarity, and
Term In Document Similarity).
I was looking for more details and explanation behind your Ranking
Algorithm.
Where could I st
In order to size the PriorityQueue, the result window size for the query is
needed. This has been computed in the SolrIndexSearcher and is available in
QueryCommand.getSupersetMaxDoc(), but it doesn't seem to be available to the
PostFilter in either the SolrParams or the SolrQueryRequest. Is there a way to
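One workaround that should behave the same for plain, non-distributed
requests: recompute the window from the original request parameters via the
thread-local SolrRequestInfo, which Solr populates during normal request
processing. A sketch (the class name is hypothetical):

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrRequestInfo;

public final class ResultWindow {
    // Call from getFilterCollector(); approximates what
    // QueryCommand.getSupersetMaxDoc() is based on (start + rows).
    public static int size() {
        SolrParams params = SolrRequestInfo.getRequestInfo().getReq().getParams();
        int start = params.getInt(CommonParams.START, 0);
        int rows = params.getInt(CommonParams.ROWS, 10);
        return start + rows;
    }
}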
> Hi Josip
Hi Liu,
> that's quite weird; in my experience highlighting is strict on string fields,
> which need an exact match, while text fields should be fine.
>
> I copied your schema definition and did a quick test in a new core;
> everything
> is default from the tutorial, and the search component is
> us
Have you tried to reindex using DocValues? Fields used for faceting are then
read from disk instead of being loaded into RAM by the FieldCache. If you have
enough memory they will end up in the system cache, but not on the Java heap.
This is good for GC too when committing.
http://wiki.apache.org/solr/DocValues
-
Hi!
Thanks for all the advice! I finally did it. The most annoying error,
which took me the better part of a day to figure out, was that the state
variable here had to be reset:
https://bitbucket.org/dermotte/liresolr/src/d27878a71c63842cb72b84162b599d99c4408965/src/main/java/net/semanticmetadata/lire/solr/
Hi
Is it possible to read configuration properties inside Solr?
For example, I have a properties file
F:\solr\example\solr\collection1\conf\test.properties which contains lots of
key/value entries.
Is there a way to read this file using a relative path and use it inside a
custom function?
Thanks &
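One approach that should work: a custom ValueSourceParser that implements
SolrCoreAware and loads the file through SolrResourceLoader, which resolves
relative names against the core's conf/ directory. A sketch for Solr 4.x
(the class name and the prop() function are hypothetical, and the parser
still has to be registered as a valueSourceParser in solrconfig.xml):

import java.io.InputStream;
import java.util.Properties;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.DoubleConstValueSource;
import org.apache.solr.core.SolrCore;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.SyntaxError;
import org.apache.solr.search.ValueSourceParser;
import org.apache.solr.util.plugin.SolrCoreAware;

public class PropertiesValueSourceParser extends ValueSourceParser
        implements SolrCoreAware {
    private final Properties props = new Properties();

    @Override
    public void inform(SolrCore core) {
        // openResource() resolves relative names against the core's conf/ dir,
        // e.g. collection1/conf/test.properties.
        try (InputStream in = core.getResourceLoader().openResource("test.properties")) {
            props.load(in);
        } catch (Exception e) {
            throw new RuntimeException("Could not load test.properties", e);
        }
    }

    @Override
    public ValueSource parse(FunctionQParser fp) throws SyntaxError {
        String key = fp.parseArg(); // e.g. prop('somekey') in a function query
        double value = Double.parseDouble(props.getProperty(key, "0"));
        return new DoubleConstValueSource(value);
    }
}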
Hi,
I have deployed solr cloud with an external zookeeper ensemble (5
instances). I am running solr instances on two servers with a single-shard
index. There are 6 replicas. I often see solr going down during high search
load or whenever I run indexing documents. I tried tuning hardcommit
(kept as
Hi Sandra,
I'm not sure if your problem is the same as ours, but we encountered the same
issue on our Solr 4.2: the major memory usage was due to
CompressingStoredFieldsReader, and GC went crazy.
In our context, we have some stored fields, and for some documents the
content of the text field could be
We have done the solr cloud setup:
On one machine:
1. CentOS 6.3
2. Apache Solr 4.1
3. JBoss AS 7.1.1.Final
4. ZooKeeper
Let's set up the ZooKeeper cloud on 2 machines
download and untar ZooKeeper into the /opt/zookeeper directory on both servers,
solr1 & solr2. On both servers do the following
root@sol
On Thu, Dec 19, 2013 at 10:01 AM, Charlie Hull wrote:
> On 18/12/2013 09:03, Alexandre Rafalovitch wrote:
>
>> Charlie,
>>
>> Does it mean you are talking to it from a client program? Or are you
>> running Tika in a listen/server mode and building some adapters for standard
>> Solr processes?
>>
>
>
Hi
I have a cluster of 14 nodes (7 shards, 2 replicas), each node with a 6gb JVM
heap, Solr 4.3.0.
I have 400 million docs in the cluster, around 60gb of index per node.
I index new docs each night, around a million a night.
As the index started to grow, I started having OutOfMemory problems
when q
On 18/12/2013 09:03, Alexandre Rafalovitch wrote:
Charlie,
Does it mean you are talking to it from a client program? Or are you
running Tika in a listen/server mode and building some adapters for standard
Solr processes?
If we're writing indexers in Python we usually run Tika as a server -
which
Hi,
Thank you for the link; it does not seem to be the same problem ...
Best regards,
Anca
On 12/18/2013 11:41 PM, Furkan KAMACI wrote:
Hi Anca;
Could you check the conversation here:
http://lucene.472066.n3.nabble.com/ColrCloud-IOException-occured-when-talking-to-server-at-td4061831.html