Hi Roy,
Unless your servers are maxed out, I'd go with:
1. Set up Solr 4.0 on the same servers... or just 1 (additional even)
2. Reindex to Solr 4.0, or use SolrEntityProcessor if all fields in the 3.6
index are stored (a sketch follows this list)
3. Add the super secret, for your eyes only 21st client with Solr 3.6 and
point i
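For step 2, a minimal data-import config using SolrEntityProcessor could look
something like this (the URL, core name and rows value are placeholders to adapt):

  <dataConfig>
    <document>
      <entity name="solr36"
              processor="SolrEntityProcessor"
              url="http://old-host:8983/solr/core36"
              query="*:*"
              rows="500"/>
    </document>
  </dataConfig>

Registered under a DataImportHandler in the 4.0 solrconfig.xml, a full-import
would then pull every stored document out of the 3.6 index.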
I, too, was going to point to the number of threads, but I was going to
suggest using fewer of them: the server has 32 cores and there was a
mention of 100 threads being used from the client. So my guess was that
the machine is busy juggling threads and context switching (how's vmstat
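If you want to check that on a Linux box, something as simple as:

  vmstat 5

while the 100-thread client is running, watching the "cs" (context switches)
and "r" (run queue) columns, should show whether the box is spending its time
switching rather than indexing.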
We've considered using AWS Beanstalk (hmm, what's the difference between
AWS auto scaling and Elastic Beanstalk? not sure) for search-lucene.com,
but the idea of something adding and removing nodes seems scary. The
scariest part to me is automatic removal of the wrong nodes, which ends in
data loss.
On 3 January 2013 05:55, Mark Miller wrote:
>
> 32 cores eh? You probably have to raise some limits to take advantage of
> that.
>
> https://issues.apache.org/jira/browse/SOLR-4078
> support configuring IndexWriter max thread count in solrconfig
>
> That's coming in 4.1 and is likely important - the default is only 8.
You did not include the stack trace. Oops.
Try using fewer threads with the concurrent uploader, or use the
single-threaded one.
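Assuming "concurrent uploader" here means SolrJ's ConcurrentUpdateSolrServer
(StreamingUpdateSolrServer in 3.x), the thread count is just a constructor
argument; a rough sketch with placeholder URL and sizes:

  import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class UploadSketch {
    public static void main(String[] args) throws Exception {
      // concurrent uploader: queue of 1000 docs, only 4 background threads
      ConcurrentUpdateSolrServer concurrent =
          new ConcurrentUpdateSolrServer("http://localhost:8983/solr/collection1", 1000, 4);
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      concurrent.add(doc);
      concurrent.blockUntilFinished();
      concurrent.shutdown();

      // or the plain single-threaded uploader
      HttpSolrServer single = new HttpSolrServer("http://localhost:8983/solr/collection1");
      single.add(doc);
      single.commit();
      single.shutdown();
    }
  }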
On 01/01/2013 03:55 PM, uwe72 wrote:
The problem occurs when I add a lot of values to a multivalued field. If I
add just a few, then it works.
This is the full stack trace:
On Wed, Jan 2, 2013 at 9:21 PM, davers wrote:
> So providing the correct replicationFactor parameter for the number of
> servers has fixed my issue.
>
> So can you not provide a higher replicationFactor than you have live_nodes?
> What if you want to add more replicas to the collection in the
This is what I get from the leader overseer log:
2013-01-02 18:04:24,663 - INFO [ProcessThread:-1:PrepRequestProcessor@419]
- Got user-level KeeperException when processing sessionid:0x23bfe1d4c280001
type:create cxid:0x58 zxid:0xfffe txntype:unknown reqpath:n/a
Error Path:/overseer E
Unfortunately, for 4.0, the collections API was pretty bare bones. You don't
actually get back responses currently - you just pass off the create command to
zk for the Overseer to pick up and execute.
So you actually have to check the logs of the Overseer to see what the problem
may be. I'm wor
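For reference, the kind of create request that gets handed off to the Overseer
looks roughly like this (host, collection name and numbers are placeholders):

  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2'

The call returns right away; whether it actually worked only shows up in the
Overseer's log.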
Thanks Mikhail.
I will have a look at the RequestHandlerBase.
Arcadius.
On 2 January 2013 12:22, Mikhail Khludnev wrote:
> Arcadius,
>
> It can be easily achieved by extending RequestHandlerBase and implementing
> straightforward looping through other request handlers via
> solrCore.getRequestHandler(name).handleRequest(req,resp).
32 cores eh? You probably have to raise some limits to take advantage of that.
https://issues.apache.org/jira/browse/SOLR-4078
support configuring IndexWriter max thread count in solrconfig
That's coming in 4.1 and is likely important - the default is only 8.
You might also want to experiment
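If SOLR-4078 lands as described, the setting would be a one-line addition to
the <indexConfig> section of solrconfig.xml, something like (the value is just
an example):

  <indexConfig>
    <maxIndexingThreads>16</maxIndexingThreads>
  </indexConfig>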
Solr 4.0?
I think there is a JIRA and a fix for this in 4.1.
- Mark
On Dec 28, 2012, at 7:20 AM, Marcin Rzewucki wrote:
> Hi,
>
> I found in logs that sometimes the following error (more lines at the end
> of this mail) occurs on Solr startup or core reload:
>
> Dec 28, 2012 8:42:01 AM
> o
Any chance you can hook up to a node with something like VisualVM and sample
some method calls or something?
- Mark
On Jan 2, 2013, at 10:15 AM, Markus Jelsma wrote:
> Hi,
>
> We have two clusters running on similar machines equipped with SSD's. One
> runs a 6 month old trunk check out and
Don't use zkRun, just zkHost. zkRun is for internal use.
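For example, assuming an external three-node ZooKeeper ensemble (hosts are
placeholders), the Solr nodes would be started along the lines of:

  java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar

rather than with -DzkRun, which just starts an embedded ZooKeeper inside Solr.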
- Mark
On Jan 2, 2013, at 6:44 PM, davers wrote:
> When I try to start a solr server in my solr cloud I am receiving the error:
>
> SEVERE: null:org.apache.solr.common.SolrException:
> org.apache.zookeeper.server.quorum.QuorumPeerConfig$Conf
On Jan 2, 2013, at 5:51 PM, Bill Au wrote:
> Is anyone running Solr 4.0 SolrCloud with AWS auto scaling?
>
> My concern is that as AWS auto scaling add and remove instances to
> SolrCloud, the number of nodes in SolrCloud Zookeeper config will grow
> indefinitely as removed instances will never
When I try to start a solr server in my solr cloud I am receiving the error:
SEVERE: null:org.apache.solr.common.SolrException:
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error
processing /srv/solr//zoo.cfg
But I don't understand why I would need zoo.cfg in solr/home whe
Hi,
I'd like the best of both worlds:
Mask some specials like "C++" to "cplusplus" or "C#" to "csharp" ...
Tokenize and identify on Unicode whitespace and charsets
A well-known splitter for compound words
Perfect superset of
or the ISOLatin1AccentFilterFactory because it can han
Hi,
I took the Solr 4.0 code from lucene_solr_branch_4x and set it up in Eclipse. I am
using the Tomcat 7 server in Eclipse. I am getting a start-up error, although Solr
comes up correctly. I did not see this error at Solr 3.6 start-up time.
INFO: Starting service Catalina
Jan 2, 2013 12:17:02 PM org.apache
Well, after hours of fighting with this, I decided to turn on replication
between the production and development cores and to use curl commands to
disable replication to the development core, only running replication after
the script updates the development db, doing a sleep 5m, and then disabling
replic
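In case it helps anyone else, the replication handler commands I mean are
along these lines (host, port and core name are placeholders):

  # stop the dev core from polling the production core
  curl 'http://backup-host:8983/solr/dev-core/replication?command=disablepoll'

  # ... update the development db, then pull the index once ...
  curl 'http://backup-host:8983/solr/dev-core/replication?command=fetchindex'

  # give the copy time to finish, then make sure polling stays off
  sleep 5m
  curl 'http://backup-host:8983/solr/dev-core/replication?command=disablepoll'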
On 02.01.2013 22:39, Uwe Reh wrote:
To get an idea what's going on, I've done some statistics with VisualVM
(see attachment).
"Merde" - the list server strips attachments.
You'll find the screenshot at
http://fantasio.rz.uni-frankfurt.de/solrtest/HotSpot.gif
uwe
Mladen,
FYI I just committed this to 4.x:
https://issues.apache.org/jira/browse/SOLR-4230
~ David
mladen micevic wrote
> Hi,
> I went through example for spatial search in Solr 4.0
> (http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4)
> Both indexing and searching work fine.
>
> Examp
Hi,
while trying to optimize our indexing workflow I reached the same
endpoint that gabriel shen described in his mail. My Solr server won't
utilize more than 40% of the computing power.
I made some tests, but I'm not able to find the bottleneck. Could
anybody help solve this?
At fi
Any other ideas to resolve this issue would be really helpful.
Thanks
Kalyan
Thanks,
Kalyan Manepalli
-Original Message-
From: Manepalli, Kalyan [mailto:kalyan.manepa...@orbitz.com]
Sent: Friday, December 28, 2012 9:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Override wt parameter
Speaking from experience: if you are using bigrams for CJK, do not highlight.
The results will look very wrong to someone who knows the language.
Even with a dictionary-based tokenizer, you'll need a client dictionary for
local terms.
wunder
On Jan 2, 2013, at 10:51 AM, Tom Burton-West wrote:
On 1/2/2013 11:48 AM, Shawn Heisey wrote:
Additional note: If you are already using the newer server objects in
SolrJ 3.6 (HttpSolrServer in most cases) you might be able to drop the
v4 SolrJ jar into your 3.6 SolrJ app and have it continue to work.
You could give that a try before you even up
Hello all,
What are the best practices for setting up the highlighter to work with CJK?
We are using the ICUTokenizer with the CJKBigramFilter, so overlapping
bigrams are what are actually being searched. However, the highlighter seems
to highlight only the first of any two overlapping bigrams. I
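For reference, the field type involved is roughly the stock ICU plus CJK
bigram chain (the type name here is just illustrative):

  <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.ICUTokenizerFactory"/>
      <filter class="solr.CJKBigramFilterFactory"/>
    </analyzer>
  </fieldType>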
On 1/2/2013 11:34 AM, Benjamin, Roy wrote:
Thanks all,
You should not use a 3.6 SolrJ client with a Solr 4 server.
I run 100 shards and 20 clients. If the above is correct then the entire
system must be shut down for many hours for an upgrade...
Using SolrJ 3.6 against a Solr 4 server will *prob
Thanks all,
>> You should not use a 3.6 SolrJ client with a Solr 4 server.
I run 100 shards and 20 clients. If the above is correct then the entire
system must be shut down for many hours for an upgrade...
Thanks
Roy
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.c
AFAIK Solr 4 should be able to read Solr 3.6 indexes. Soon those files will
be updated to 4.0 format and will not be readable by Solr 3.6 anymore. See
http://wiki.apache.org/lucene-java/BackwardsCompatibility
You should not use a 3.6 SolrJ client with a Solr 4 server.
Tomás
On Wed, Jan 2, 2013 a
Indexes will not work???
I've upgraded a 3.6 index. I used the new Solrconfig from 4.0, and had
to do some hacking to my schema (e.g. add a version field) to make it
work, but once I'd done that, all was fine.
As I understand it, any Lucene instance can understand the format of an
index from one major version back.
Indexes will not work. I have not heard of an index upgrader. If you run
your 3.6 and new 4.0 Solr at the same time, you can upload all the data
with a DataImportHandler script using the SolrEntityProcessor.
How large are your indexes? 4.1 indexes will not match 4.0, so you will
have to upload
Will the existing 3.6 indexes work with the 4.0 binary?
Will 3.6 SolrJ clients work with 4.0 servers?
Thanks
Roy
Jack Krupansky-2 wrote
> Do you have any soft commits
I don't think so, especially since the Live production core and replicated
production core are only 7 documents apart. I haven't used the copied
development core at all, nor do I ever use the replicated production core
since it serves
Hi Alan,
I noticed that issue, but I'm using today's check-out.
Thanks,
Markus
-Original message-
> From:Alan Woodward
> Sent: Wed 02-Jan-2013 16:30
> To: solr-user@lucene.apache.org
> Subject: Re: CPU spikes on trunk
>
> Hi Markus,
>
> How recent a check-out from trunk are you running?
Furthermore, if you plan to index "a lot" of data per application, and
you are using Solr 4.0.0+ (including Solr Cloud), you probably want to
consider creating a collection per application instead of a core per
application.
On 1/2/13 2:38 PM, Erick Erickson wrote:
This is a common approach to
Hi Markus,
How recent a check-out from trunk are you running? I added a bunch of
statistics recording a few months back which we had to back out over Christmas
because it was causing memory leaks.
Alan Woodward
a...@flax.co.uk
On 2 Jan 2013, at 15:15, Markus Jelsma wrote:
> Hi,
>
> We have
Hi,
We have two clusters running on similar machines equipped with SSD's. One runs
a 6 month old trunk check out and another always has a very recent check out.
Both sometimes receive a few documents to index. The old cluster actually
processes queries.
We've seen performance differences befor
Do you have any soft commits (or commitWithin that is a soft commit) that
show up in queries but haven't yet been committed to disk? You have to do a
hard commit to flush those to disk.
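For instance, an explicit hard commit issued by hand would be something like
this (host and core name are placeholders):

  curl 'http://localhost:8983/solr/collection1/update?commit=true'

After that, whatever was only visible via soft commits is also on disk.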
-- Jack Krupansky
-Original Message-
From: UnConundrum
Sent: Wednesday, January 02, 2013 8:42 AM
T
I replicate from a live server to a backup server. That backup server is
also used for development, so every night by cron, and sometimes manually, I
execute the following script to update a development core on the backup
server:
date
echo "stopping mysql slave"
mysqladmin -u intranet -ppassword
This is a common approach to this problem; having separate
cores keeps the apps from influencing each other when it comes
to term frequencies etc. It also keeps the chances of returning
the wrong data to a minimum.
As to how many cores can fit, "it depends" (tm). There's lots of
work going on ri
Arcadius,
It can be easily achieved by extending RequestHandlerBase and implementing
straightforward looping through other request handlers via
solrCore.getRequestHandler(name).handleRequest(req,resp).
I have no spare time to contribute it - it's about #10 in my TODO list.
I'm replying to mail li
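A minimal sketch of that idea, in case someone wants to pick it up (the class
name and the handler names in the loop are made up, and depending on the exact
Solr version a couple more SolrInfoMBean methods may need to be implemented):

  import org.apache.solr.core.SolrCore;
  import org.apache.solr.handler.RequestHandlerBase;
  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.request.SolrRequestHandler;
  import org.apache.solr.response.SolrQueryResponse;

  public class DelegatingHandler extends RequestHandlerBase {

    @Override
    public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
      SolrCore core = req.getCore();
      // loop over the handlers we want to chain and let each one fill the response
      for (String name : new String[] {"/handler-a", "/handler-b"}) {
        SolrRequestHandler handler = core.getRequestHandler(name);
        if (handler != null) {
          handler.handleRequest(req, rsp);
        }
      }
    }

    @Override
    public String getDescription() {
      return "Delegates the request to a list of other registered handlers";
    }

    @Override
    public String getSource() {
      return null;
    }
  }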
Thanks for this valuable explanation.
It was very helpful.
Best Regards
Hardik Upadhyay
From: Per Steffensen [via Lucene]
[mailto:ml-node+s472066n4030004...@n3.nabble.com]
Sent: Wednesday, January 02, 2013 2:10 PM
To: Hardik Upadhyay
Subject: Re: Solr 4.0 NRT Search
On 1/1/13 2:07 PM, hupadh
On 1/1/13 2:07 PM, hupadhyay wrote:
I was reading a solr wiki located at
http://wiki.apache.org/solr/NearRealtimeSearch
It says all commitWithin are now soft commits.
can anyone explain what this means?
Soft commit means that the documents indexed before the soft commit will
become searchable
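As an illustration, the relevant knobs in solrconfig.xml look roughly like
this (the times are just examples):

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- hard commit: flushes to disk, no new searcher -->
    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <!-- soft commit: makes documents searchable without flushing -->
    <autoSoftCommit>
      <maxTime>1000</maxTime>
    </autoSoftCommit>
  </updateHandler>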
Hello!
Although it's not C but C++, there is a project aiming at this -
http://code.google.com/p/solcpp/
However, I don't know how usable it is; you can also just make plain HTTP
calls and process the response.
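For example, a plain HTTP query (host, core and query are placeholders)
returning JSON that any C JSON library can parse:

  curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=10&wt=json'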
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
Hi All,
Is there any C API for Solr??
Thanks and regards,
Romita