I get done what I'm trying to do?
thnx,
Christoph
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
? Int/long to Int/long?
thnx,
Christoph
--
Solr Community,
I'm Christoph Schmidt (http://www.moresophy.com/de/management), CEO of the
German company moresophy GmbH.
My Solr Wiki name is:
- ChristophSchmidt
We have been working with Lucene since 2003 and with Solr since 2012, and are
building linguistic token filters and plugins for Solr
pdating and serving queries, and what query load per-collection and total
query load do you need to design for?
-- Jack Krupansky
-----Original Message-----
From: Christoph Schmidt
Sent: Monday, September 1, 2014 3:50 AM
To: solr-user@lucene.apache.org
Subject: Re: Scaling to large Number of Collections
We already reduced the thread stack size to -Xss256k.
How could we reduce the size of the transaction log? By fewer autoCommits? Or
could it be cleaned up?
Thanks
Christoph
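For context, the transaction log rolls over on each hard commit, so its size is usually controlled by committing (hard) more often rather than less. A minimal solrconfig.xml sketch of that setup; the interval values below are illustrative, not taken from this thread:

```xml
<!-- Sketch (values illustrative): a frequent hard autoCommit truncates
     the transaction log without the cost of reopening a searcher. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>15000</maxTime>           <!-- hard commit at most every 15 s -->
    <openSearcher>false</openSearcher> <!-- flush tlog, keep current searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>60000</maxTime>           <!-- document visibility via soft commits -->
  </autoSoftCommit>
</updateHandler>
```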
-----Original Message-----
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Sunday, 31 August 2014 20:12
To
Yes, this would help us in our scenario.
-----Original Message-----
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: Sunday, 31 August 2014 18:10
To: solr-user@lucene.apache.org
Subject: Re: Scaling to large Number of Collections
We should also consider "lightly-sharded" c
ing a new "user-id" field and combining user
collections.
Best
Christoph
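The combined-collection approach mentioned above (one shared collection with a per-document "user-id" field) typically isolates each user's documents with a filter query. A minimal sketch, assuming a field named user_id (the field name is hypothetical, not from this thread):

```java
// Sketch: restrict a query in a shared collection to one tenant's documents
// by building an fq on a (hypothetical) user_id field. Quotes inside the id
// are escaped so the value stays a single phrase term.
public class UserFilterSketch {
    static String userFilter(String userId) {
        return "user_id:\"" + userId.replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        // Would be passed to Solr as fq=user_id:"customer-42"
        System.out.println(userFilter("customer-42"));
    }
}
```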
-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Sunday, 31 August 2014 18:04
To: solr-user@lucene.apache.org
Subject: Re: Scaling to large Number of Collections
W
er of CPUs, or twice or triple that? Native threads are restricted by the
total virtual memory of the system (at least on Linux, as far as I know). So
the 10,000 threads we use are somewhere near the limit of the hardware we
have.
Christoph
-----Original Message-----
From: Ramkumar R. Aiy
Yes, we will think about how to reorganise the application.
Thanks
Christoph
-----Original Message-----
From: Joseph Obernberger [mailto:joseph.obernber...@gmail.com]
Sent: Sunday, 31 August 2014 16:58
To: solr-user@lucene.apache.org
Subject: Re: Scaling to large Number of Collections
Is there a Jira task for this?
Thanks
Christoph
-----Original Message-----
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Sunday, 31 August 2014 14:24
To: solr-user
Subject: Re: Scaling to large Number of Collections
> On Aug 31, 2014, at 4:04 AM, Christoph Schm
direction of "large number of collections"? And the
question is, what is a "large number"?
Best
Christoph
-----Original Message-----
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: Sunday, 31 August 2014 14:09
To: solr-user@lucene.apache.org
Subject:
Thanks for discussion and help
Christoph
___
Dr. Christoph Schmidt | Managing Director
P +49-89-523041-72
M +49-171-1419367
Skype: cs_moresophy
christoph.schm...@moresophy.de
www.moresophy.com
Looks like it works. No crashes, and the log states it was added. I
didn't test against actual data, though.
04.02.2010 17:14:13
org.apache.solr.handler.extraction.ExtractingRequestHandler inform
INFO: Adding Date Format: yyyy-MM-dd HH:mm:ss
04.02.2010 17:14:13
org.apache.solr.handler.extraction.E
Good job Mark, works fine and does not keep my files open.
Thanks,
Chris
On 03.02.2010 15:24, Mark Miller wrote:
> Hey Christoph,
>
> Could you give the patch at
> https://issues.apache.org/jira/browse/SOLR-1744 a try and let me know
> how it works out for you?
Cool, this way it's no longer crashing.
Thanks and Regards,
Chris
On 04.02.2010 14:29, Mark Miller wrote:
> Before you file a JIRA issue:
>
> I don't believe this is a bug, so there is likely no need for a JIRA issue.
> Try putting the date.formats snippet in the defaults section rather than
> simply
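A hedged sketch of what that advice looks like in solrconfig.xml; the handler name and exact parameter shape are assumed from the stock example config, not confirmed by this thread:

```xml
<!-- Sketch: date.formats declared inside the handler's defaults section,
     as suggested above (parameter shape assumed). -->
<requestHandler name="/update/extract"
                class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="date.formats">yyyy-MM-dd HH:mm:ss</str>
  </lst>
</requestHandler>
```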
Hi list,
I'm using the ExtractingRequestHandler to extract content from
documents. It extracts the "last_modified" field quite fine, but of
course only for documents where that field is set. If the field is not
set, I want to pass the file system timestamp of the file instead.
I'm doing:
final Conte
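The original code is truncated above. As a hedged sketch of the fallback idea (the literal.* parameter convention is ExtractingRequestHandler's; the formatting helper below is my assumption), the file's mtime can be rendered as a Solr-style UTC date and passed as literal.last_modified on the request:

```java
// Sketch: when Tika extracts no last_modified, fall back to the file's
// filesystem mtime and send it as literal.last_modified (e.g. via
// ContentStreamUpdateRequest.setParam). Helper name is hypothetical.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class MtimeFallback {
    // Format an mtime (ms since epoch) the way Solr expects dates: ISO 8601, UTC.
    static String mtimeAsSolrDate(long mtimeMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(new Date(mtimeMillis));
    }

    public static void main(String[] args) {
        // 0 ms since the epoch formats as the Unix epoch in UTC.
        System.out.println(mtimeAsSolrDate(0L)); // prints 1970-01-01T00:00:00Z
    }
}
```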
lose the stream it gets?
>From 08b158c28ebee618ac65defe731cbbc4954977b2 Mon Sep 17 00:00:00 2001
From: Christoph Brill
Date: Tue, 2 Feb 2010 13:39:26 +0100
Subject: [PATCH] [Bugfix] Close streams after sending them
---
.../client/solrj/impl/CommonsHttpSolrServer.java |9 +
1 files changed, 9 insertions(+), 0 deletions(-)
();
}
}
This isn't exactly beautiful code, but at least it works this way. It would
be great if someone came up with a better idea for SolrJ 1.5.
Regards,
Chris
On 02.02.2010 13:27, Christoph Brill wrote:
> Hi list,
>
> I'm using ContentStreamUpdateRequest.addFile(File)
Hi list,
I'm using ContentStreamUpdateRequest.addFile(File) to index a bunch of
documents. This works fine, except that the stream created in addFile
doesn't seem to get closed. This causes issues because my process has too
many open files.
It's a bug, right?
Regards,
Chris
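The patch above (only its header survives here) closes each stream after the request is sent. The general pattern it implements, sketched independently of the SolrJ internals:

```java
// Sketch of the fix pattern: every stream opened for a request must be
// closed once the request has been sent, even if one close() fails.
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CloseAfterSend {
    static int closed = 0; // counts successful closes, for the demo below

    // Best-effort close: one failing stream must not leak the rest.
    static void closeQuietly(List<? extends Closeable> streams) {
        for (Closeable c : streams) {
            try {
                c.close();
            } catch (IOException ignored) {
                // swallow: nothing useful to do after the request is sent
            }
        }
    }

    public static void main(String[] args) {
        List<Closeable> streams = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            streams.add(() -> closed++); // stand-in for a content stream
        }
        closeQuietly(streams);
        System.out.println(closed); // prints 3
    }
}
```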
Hi list,
I tried to add the following to my solrconfig.xml (to the
'
yyyy-MM-dd
which is described on the wiki page of the ExtractingRequestHandler[1].
After doing so I always get a ClassCastException once the lazy init of
the handler is happening. This is a stock solr 1.4 with no
modifica
Hi,
I use the DIH with an RDBMS for indexing a large MySQL database with
about 7 million entries.
Full indexing works fine; in schema.xml I implemented a uniqueKey
field (which is of type 'text').
I run queries with the dismax query handler and get my results as
a PHP array.
Now, s
arch for Be*n returns 1 hit
I can't explain why, does anybody have a clue?
Version informations
Solr Implementation Version: 1.2.0 - Yonik - 2007-06-02 17:35:12
Lucene Implementation Version: build 2007-05-20
org.apache.solr.request.StandardRequestHandler version: $Revision:
542679 $
Thanks for any help,
Christoph