Done, see:
https://issues.apache.org/jira/browse/SOLR-3541
On 12-6-2012 18:39, Sami Siren wrote:
On Tue, Jun 12, 2012 at 4:22 PM, Thijs wrote:
Hi
I just checked out and built Solr & Lucene from branches/lucene_4x.
I wanted to upgrade my custom client to this new version (using solrj).
I updated the other libs with the ones in /solr/dist/solrj-lib.
However, when I wanted to run my client I got exceptions indicating that
I was missing the HttpClient jars (httpclient, httpcore, httpmime).
Shouldn't those go into lucene/solr/dist/solrj-lib as well?
Do I need to create a ticket for this?
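For reference, roughly the kind of client code that hits this (a minimal
sketch; the URL is a placeholder). HttpSolrServer is built on Apache
HttpClient, so httpclient, httpcore and httpmime have to be on the
classpath next to the jars shipped in solr/dist/solrj-lib:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class PingClient {
        public static void main(String[] args) throws SolrServerException {
            // Constructing HttpSolrServer already needs the HttpClient jars;
            // without them this line throws NoClassDefFoundError.
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
            System.out.println(
                server.query(new SolrQuery("*:*")).getResults().getNumFound());
        }
    }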
Thijs
3.x. But if not
we'll probably go live on 4.x
Thijs
On 17-10-2011 11:46, Kai Gülzau wrote:
Nobody?
SOLR-139 seems to be the most popular issue, but I don't think it will be
resolved in the near future (this year). Right?
So I will try SOLR-2272 as a workaround, split up my documents in "
Done
https://issues.apache.org/jira/browse/SOLR-2824
On 12-10-2011 0:47, Chris Hostetter wrote:
: I have the following query
: /core1/select?q=*:*&fq={!join from=id to=childIds fromIndex=core2}specials:1&fl=id,name
...
: org.apache.solr.search.JoinQParserPlugin$1.parse(JoinQParserPlugin.java:60)
Hi
Can someone help me confirm this. Or should I create a ticket?
Thijs
On 7-10-2011 10:10, Thijs wrote:
Hi
I think I might have found a bug in the JoinQParser. But I want to
verify this first before creating an issue.
I have two cores with two different schemas.
now I want to join be
In JoinQParserPlugin.parse (JoinQParserPlugin.java:60),
the parse is called for the filterquery on the main core (core1), not
the core of the 'fromIndex' (core2).
Should this work? Am I doing something wrong? Or do the different cores
have to have the same schema?
I'm using the latest trunk.
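As a self-contained sketch of what I'm doing (URLs and the wrapper class
are made up; core and field names are the ones from the query above):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class JoinExample {
        public static void main(String[] args) throws Exception {
            // Query core1, filtering by a join against core2 (the 'fromIndex').
            CommonsHttpSolrServer core1 =
                new CommonsHttpSolrServer("http://localhost:8983/solr/core1");
            SolrQuery q = new SolrQuery("*:*");
            q.addFilterQuery("{!join from=id to=childIds fromIndex=core2}specials:1");
            q.setFields("id", "name");
            QueryResponse rsp = core1.query(q);
            System.out.println(rsp.getResults().getNumFound());
        }
    }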
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirr
The machines are going
to connect to the same storage and are all running active Solr instances.
Thijs
On 8-10-2010 11:58, Peter Sturge wrote:
Hi,
We've used iSCSI SANs with 6x1TB 15k SAS drives RAID10 in production
environments, and this works very well for both reads and writes. We
Using multiple CommonsHttpSolrServers with the BinaryRequestWriter set
greatly improved our throughput, as it reduced the CPU load on both the
machine that gathered the documents and the machine running the Solr
server.
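In code the setup is roughly this (a sketch; the URL and the wrapper
class are made up):

    import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class IndexerSetup {
        public static CommonsHttpSolrServer create(String url) throws Exception {
            // One instance per indexing thread; javabin instead of XML cuts
            // serialization cost on both the client and the server.
            CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
            server.setRequestWriter(new BinaryRequestWriter());
            return server;
        }
    }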
Thijs
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene
Hi.
Our hardware department is planning on moving some stuff to new machines
(at our request).
They are suggesting using virtualization (some Cisco solution) on those
machines and having the 'disk' connected via iSCSI.
Does anybody have experience running a Solr index on an iSCSI drive?
We have
The streaming won't use the 'set' RequestWriter. It uses a custom XML
request writer embedded in the StreamingUpdateSolrServer.
I was also hoping it would use a BinaryRequestWriter, but after digging
it turned out not to.
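To illustrate (a sketch; URL, queue size and thread count are made up),
even this has no effect on the streamed updates:

    import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

    public class StreamingSetup {
        public static void main(String[] args) throws Exception {
            StreamingUpdateSolrServer streamer =
                new StreamingUpdateSolrServer("http://localhost:8983/solr", 50, 4);
            // Ignored for streamed updates: they are still written as XML by
            // the writer embedded in StreamingUpdateSolrServer.
            streamer.setRequestWriter(new BinaryRequestWriter());
        }
    }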
On 1-7-2010 15:25, Jan Høydahl / Cominvent wrote:
Hi,
I had the impre
Sorry I missed it in the solrconfig.xml (my bad). I wasn't looking for
it in the right place.
Thijs
On 27-5-2010 6:41, Chris Hostetter wrote:
: So now I wonder why BinaryRequestWriter (and BinaryUpdateRequestHandler)
: aren't turned on by default (esp. considering some threads
wasn't easy as it's
hardly mentioned on the Wiki. I'll see if I can update this.
Thanks for all the advice and esp. the great work on Solr & Lucene.
Thijs
On 20-5-2010 21:34, Chris Hostetter wrote:
: StreamingUpdateSolrServer already has multiple threads and uses multiple
: connect
Why would I need faster hardware if my current hardware isn't reaching
its maximum capacity?
I'm already using different machines for querying and indexing, so while
indexing the queries aren't affected. Pulling an optimized snapshot
isn't even noticeable on the query machine.
I already have a blocking queue in place (that's my custom queue) and
luckily I'm indexing faster than what you're doing. Currently it takes
about 2 hours to index the 5M documents I'm talking about. But I still
feel as if my machine is underutilized.
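Roughly what my indexer threads look like (a sketch with made-up names;
several of these run against one shared queue):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexWorker implements Runnable {
        private final BlockingQueue<SolrInputDocument> queue;
        private final SolrServer server;

        public IndexWorker(BlockingQueue<SolrInputDocument> queue,
                           SolrServer server) {
            this.queue = queue;
            this.server = server;
        }

        public void run() {
            List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
            try {
                while (true) {
                    // Block for the first document, then drain whatever else
                    // is queued so each HTTP request carries a decent batch.
                    batch.add(queue.take());
                    queue.drainTo(batch, 499);
                    server.add(batch);
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }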
Thijs
On 20-5-2010 17:16, Na
ing? Because I
have a feeling that my machine is capable of doing more (using more
CPUs). I just can't figure out how.
Thijs
Can't I just plug in an 'empty' slave that knows where its
master is and have it pull in all the required cores and indexes?
Thijs
On 8-12-2009 14:25, Joe Kessel wrote:
Hi,
In my environment I create cores on the fly, then replicate the core to all of the slaves. I first
cre
But the slave never gets the message that a core is created...
at least not in my setup...
So it never starts replicating...
On 8-12-2009 12:13, Noble Paul നോബിള് नोब्ळ् wrote:
On Tue, Dec 8, 2009 at 2:43 PM, Thijs wrote:
Hi
I need some help setting up dynamic multicore replication.
We are changing our setup from a replicated single core index with multiple
document types, as described on the wiki[1], to a dynamic multicore setup.
We need this so that we can display f
licating those to?
Thanks in advance.
Thijs
[1]
http://wiki.apache.org/solr/MultipleIndexes#Flattening_Data_Into_a_Single_Index
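A sketch of the client-side core creation involved (SolrJ's
CoreAdminRequest; the hosts and core name are made up). Since a CREATE on
the master doesn't propagate, the same call has to go to every slave too:

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;

    public class CreateCoreEverywhere {
        public static void main(String[] args) throws Exception {
            String[] hosts = { "http://master:8983/solr",
                               "http://slave1:8983/solr" };
            for (String host : hosts) {
                // Core creation is per-node: a CREATE on the master is not
                // replicated, so issue it against each slave as well.
                CoreAdminRequest.createCore("customer42", "customer42",
                        new CommonsHttpSolrServer(host));
            }
        }
    }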
I haven't had time to actually ask this on the list myself, but seeing
this, I just had to reply. I was wondering this myself.
Thijs
On 23-10-2009 5:50, R. Tan wrote:
Hi,
Is it possible to collapse the results from multiple fields?
Rih
solrHome, to
being relative to the working directory?
Do I have to set this manually to the correct directory?
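One way to pin it explicitly (a sketch for an embedded setup; the path is
made up, and a servlet container would get -Dsolr.solr.home=... instead):

    public class SolrHomeSetup {
        public static void main(String[] args) {
            // Set solr home up front instead of relying on the working
            // directory the JVM happens to start in.
            System.setProperty("solr.solr.home", "/opt/solr/home");
        }
    }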
Thijs
I'm actually looking for the same answer.
I have worked around it by indexing 'empty' fields with a dummy value,
but this isn't an ideal situation.
Thijs
On 11/19/08 10:38 PM, Geoffrey Young wrote:
Lance Norskog wrote:
Try: Type:blue OR -Type:[* TO *]
You can't ha
a lot of work to iterate over all variations in a DocSet just to
get the few unique products.
But what I understand from your answer is that the best way to get the 3
unique products is to iterate over the 1000 variations in the result
DocSet? And if that is the case I'm happy with it.
Thanks
iterates over all the
documents in the result docset (SimpleFacets.getFieldCacheCounts, line 259).
But if this is the only way, then ok.
Thnx
Thijs
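The facet request that exercises this path, as a SolrJ sketch (the query
and field name are made up):

    import org.apache.solr.client.solrj.SolrQuery;

    public class UniqueValuesQuery {
        public static SolrQuery build() {
            SolrQuery q = new SolrQuery("colour:blue"); // made-up query
            q.setRows(0);                 // only the facet counts are needed
            q.setFacet(true);
            q.addFacetField("productId"); // made-up field with the database ids
            q.setFacetMinCount(1);        // only values present in the results
            return q;
        }
    }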
Ryan McKinley wrote:
On Apr 27, 2008, at 7:50 AM, Thijs Vonk wrote:
What is the best way to get the unique terms from a field in a result?
I
contain database IDs that I use on the client side to get
additional information from the database.
Is there a faster way to get the unique values from a field in a result?
Thijs
Mar 21, 2008 4:31:20 PM org.apache.solr.core.SolrCore execute
INFO: null q=* 0 15
This is my (temp) config for the firstSearcher:
*:*
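i.e. the stock QuerySenderListener entry in solrconfig.xml, something
like (a sketch from memory):

    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">*:*</str></lst>
      </arr>
    </listener>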
What am I doing wrong? Because it looks like SearchHandler.inform(..)
is never called, but handleRequestBody is.
SolrParams never used to set them.
Thijs
Thijs wrote:
I'm running into a problem where the calls to SolrQuery.getStart() and
SolrQuery.getRows() always return null.
I'm using trunk of 1.3
I think I also found the problem.
If I use SolrQuery.setRows(20), the value is set in the LinkedHashMap
return "field+(param==null?"":param);
//return "f."+field+'.'+param;
}
It works, and getStart and getRows return the values previously set.
I'm not sure this is the correct solution; could someone have a look and,
if OK, commit it to the codebase?
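For completeness, the round trip that works after the change (a small
sketch):

    import org.apache.solr.client.solrj.SolrQuery;

    public class RowsStartCheck {
        public static void main(String[] args) {
            SolrQuery q = new SolrQuery("*:*");
            q.setStart(0);
            q.setRows(20);
            // Both of these used to come back null; with the fix they
            // return the values just set.
            System.out.println(q.getStart() + " " + q.getRows());
        }
    }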
Thanks
Thijs