On Tue, Dec 02, 2008 at 04:34:08PM -0500, Burton-West, Tom wrote:
> Hello all,
>
> As I understand distributed Solr, a request for a distributed search
> goes to a particular Solr instance with a list of arguments specifying
> the addresses of the shards to search. The Solr instance to which the
OK. I guess I see it. I am thinking of exposing the writes to the
properties file via an API,
say Context#persist(key, value);
This can write the data to dataimport.properties.
You must be able to retrieve that value by ${dataimport.persist.}
or through an API, Context.getPersistValue(key
Hello,
I am trying to exclude certain records from my search results in my
query by specifying which ones I don't want back, but it's not working
as expected. Here is my query:
+message:test AND (-thread_id:123 OR -thread_id:456 OR -thread_id:789)
So basically I just want anything back that has th
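(For reference: the Lucene query parser does not reliably support purely negative clauses inside a sub-query, and an OR of negations matches nearly everything anyway. A hedged sketch of the usual rewrite, keeping the field names from the question, puts the prohibited clauses at the top level:)

```
+message:test -thread_id:123 -thread_id:456 -thread_id:789
```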
We are using Solr and would like to know: is there a query syntax to retrieve
the newest x records, in descending order?
Our id field is simply that (unique id record identifier) so ideally we would
want to get the last say 100 records added.
Possible?
Also is there a special way it needs to be
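(A hedged sketch of the usual approach, with a hypothetical host: the standard `sort` parameter does this, assuming `id` is an indexed, numerically sortable field -- a lexicographic sort on a string id will not give insertion order, and a `timestamp` field with `default="NOW"` is generally more reliable:)

```
http://localhost:8983/solr/select?q=*:*&sort=id+desc&rows=100
```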
I don't think it is. There is another C# client up on Google Code, but I'm not
sure how well that one is maintained...
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Grant Ingersoll <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
>
Hi Matthew, Hi Yonik,
...sorry for the flag .. didn't want to ...
Solr 1.3 / Apache 5.5
Data's directory size : 7.9G
I'm using JMeter to send HTTP requests; I'm sending exactly the same request
to Solr and to Sphinx (MySQL), both over HTTP.
solr
http://test-search.com/test/selector?cache=0&backend=solr&reque
Do you mean the file used by DataImportHandler called dataimport.properties?
If you mean that one, it's written at the end of the indexing process. The
written date will be used in the next indexing run by the delta query to
identify the new or modified rows in the database.
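(As a sketch of how DIH consumes that stored date -- the table and column names here are hypothetical -- the value is exposed to the delta query as `${dataimporter.last_index_time}`:)

```xml
<!-- delta-import sketch; entity/table/column names are made up -->
<entity name="item" query="select * from item"
        deltaQuery="select id from item
                    where last_modified > '${dataimporter.last_index_time}'"/>
```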
What I am trying to do is
Hello all,
As I understand distributed Solr, a request for a distributed search
goes to a particular Solr instance with a list of arguments specifying
the addresses of the shards to search. The Solr instance to which the
request is first directed is responsible for distributing the query to
the o
Could you provide more information? How big is the index? How are you
searching it? Some examples might help pin down the issue.
How long are the queries taking? How long did they take on Sphinx?
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
[EMAIL PROTECTED] - 702-943-7833
Anyone know the status of SolrSharp? Is it actively maintained?
Thanks,
Grant
On Tue, Dec 2, 2008 at 3:41 PM, Burton-West, Tom <[EMAIL PROTECTED]> wrote:
> Thanks Yonik,
>
> ->>The next nightly build (Dec-01-2008) should have the changes.
>
> The latest nightly build seems to be 30-Nov-2008 08:20,
> http://people.apache.org/builds/lucene/solr/nightly/
> has the version with
Thanks Yonik,
->>The next nightly build (Dec-01-2008) should have the changes.
The latest nightly build seems to be 30-Nov-2008 08:20,
http://people.apache.org/builds/lucene/solr/nightly/
Has the version with the NIO fix been built? Are we looking in the
wrong place?
Tom
Tom Burton-West
Infor
Thanks Yonik
The main search still happens through SolrDispatchFilter, so SolrQueryRequest is
getting closed implicitly.
But I do use the direct API in the following cases, so please suggest any more
possible resource issues:
1. update and commit;
core.getUpdateHandler();
Here I close the updateHandl
First of all...
The standard request handler uses the default search field specified in your
schema.xml -- dismax does not. Dismax looks at the "qf" param to decide
which fields to search for the "q" param. If you started with the example
schema the dismax handler may have a default value for "q
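(A hedged illustration of the difference; the field names and boosts are made up:)

```
# standard: "q" is parsed against the schema's defaultSearchField
select?qt=standard&q=ipod

# dismax: "q" terms are searched across the fields listed in "qf", with boosts
select?qt=dismax&q=ipod&qf=title^2+body
```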
I actually found the problem. Oracle returns the field name as "Capital".
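(For anyone hitting the same thing: Oracle upper-cases unquoted column names, so an explicit DIH field mapping sidesteps the case mismatch. A sketch -- the entity, table, and field names are made up:)

```xml
<!-- map the upper-cased Oracle columns to the lowercase Solr fields -->
<entity name="capitals" query="select ID, CAPITAL from capitals">
  <field column="ID" name="id"/>
  <field column="CAPITAL" name="capital"/>
</entity>
```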
On Tue, Dec 2, 2008 at 1:57 PM, Jae Joo <[EMAIL PROTECTED]> wrote:
> Hey,
>
> I am trying to connect to the Oracle database and index the values into Solr,
> but I am getting the
> "Document [null] missing required field: id".
Hey,
I am trying to connect to the Oracle database and index the values into Solr,
but I am getting the
"Document [null] missing required field: id".
Here is the debug output.
1
2
0
2008-12-02 13:49:35
Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
schema.xml
id
delta-import file?
On Wed, Dec 3, 2008 at 12:08 AM, Lance Norskog <[EMAIL PROTECTED]> wrote:
> Does the DIH delta feature rewrite the delta-import file for each set of
> rows? If it does not, that sounds like a bug/enhancement.
> Lance
>
> -Original Message-
> From: Noble Paul നോബിള് नो
Definitely, but it'll take me a few days. I'll also report findings on
SOLR-465. (I've been on holiday for a few weeks)
Noble Paul നോബിള് नोब्ळ् wrote:
>
> wojtek, you can report back the numbers if possible
>
> It would be nice to know how the new impl performs in the real world
>
>
>
--
Vi
Do you have a "dismaxrequest" request handler defined in your solr config xml?
Or is it "dismax"?
-Todd Feak
-Original Message-
From: tushar kapoor [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2008 10:07 AM
To: solr-user@lucene.apache.org
Subject: Encoded search string & qt=Dis
Does the DIH delta feature rewrite the delta-import file for each set of rows?
If it does not, that sounds like a bug/enhancement.
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2008 8:51 AM
To: solr-user@lucene.apache.org
wojtek, you can report back the numbers if possible
It would be nice to know how the new impl performs in the real world
On Tue, Dec 2, 2008 at 11:45 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 2, 2008 at 1:10 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>> Is there a configurable way to sw
On Tue, Dec 2, 2008 at 1:10 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
> Is there a configurable way to switch to the previous implementation? I'd
> like to see exactly how it affects performance in my case.
Thanks for the reminder, I need to document this in the wiki.
facet.method=enum (enumerate
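(A hedged sketch of the request parameter in question; the field name is made up. `facet.method=enum` falls back to the older term-enumeration algorithm, while `facet.method=fc` selects the new field-cache implementation:)

```
...&facet=true&facet.field=category&facet.method=enum
```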
Using embedded is always more error-prone... you're probably forgetting
to close some resource.
Make sure to close all SolrQueryRequest objects.
Start with a memory profiler or heap dump to try and figure out what's
taking up all the memory.
-Yonik
On Tue, Dec 2, 2008 at 1:05 PM, Sunil <[EMAIL PRO
Is there a configurable way to switch to the previous implementation? I'd
like to see exactly how it affects performance in my case.
Yonik Seeley wrote:
>
> And if you want to verify that the new faceting code has indeed kicked
> in, some statistics are logged, like:
>
> Nov 24, 2008 11:14:32
Hi,
I am facing problems while searching for some encoded text as part of the
search query string. The results don't come up when I use some URL encoding
with qt=dismaxrequest.
I am searching a Russian word by posting a URL encoded UTF8 transformation
of the word. The query works fine for normal
I have been facing this issue for a long time in a production environment and
wanted to know if anybody who has come across it can share their thoughts.
Appreciate your help.
Environment
2 GB index file
3.5 million documents
15 mins. time interval for committing 100 to 400 document updates
Commit happens once i
On Tue, Dec 2, 2008 at 12:04 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
> How can I make it faster?
There's no "-go-faster-please" flag ;-)
Give us the exact URL and we might be able to help figure out what part is slow.
-Yonik
Hi,
I tested my old search engine, which is Sphinx, and my new one, which is Solr,
and I've got a huge difference in results.
How can I make it faster?
Thanks a lot,
--
View this message in context:
http://www.nabble.com/Solr-1.3---response-time-very-long-tp20795134p20795134.html
Sent from the Solr - U
You can write the details to a file using a Transformer itself.
It is wise to stick to the public API as far as possible. We will
maintain back-compat and your code will be usable with newer versions.
On Tue, Dec 2, 2008 at 5:12 PM, Marc Sturlese <[EMAIL PROTECTED]> wrote:
>
> Thanks I really apre
No, this issue is new. But there was a general PHPResponseWriter task...
I opened a new one: https://issues.apache.org/jira/browse/SOLR-892
Feel free to move / edit / merge it. ;) I hope I made the problem clear.
~ Steffen
Yonik Seeley wrote:
>
> On Tue, Dec 2, 2008 at 6:39 AM, Steffen B. <[EMA
On Tue, Dec 2, 2008 at 6:39 AM, Steffen B. <[EMAIL PROTECTED]> wrote:
> Little update: this behaviour can be easily reproduced with the example
> configuration that comes with Solr:
> After uncommenting line 733 in
> apache-solr-nightly/example/solr/conf/solrconfig.xml (which activates the
> PHPS q
Yeah, sorry, I was not clear in my question. Storage would end up being done
the same way, of course.
I guess I'm more looking for feedback about what people have used as a
strategy to handle this type of situation. This goes for faceting as well.
Assuming I do faceting by author and there is 2 a
Thanks, I really appreciate your help.
I didn't explain myself so well here:
> 2.-This is probably my most difficult goal.
> Delta-import reads a timestamp from dataimport.properties and
> modifies/adds
> all documents from the db which were inserted after that date. What I want is
> to
> be able to
Little update: this behaviour can be easily reproduced with the example
configuration that comes with Solr:
After uncommenting line 733 in
apache-solr-nightly/example/solr/conf/solrconfig.xml (which activates the
PHPS queryResponseWriter) loading this URL on the example index shows the
same proble
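(For readers without the nightly at hand, the uncommented line registers the writer. A sketch of the relevant solrconfig.xml entry, with the class name as in the 1.x tree -- worth double-checking against your build:)

```xml
<queryResponseWriter name="phps"
    class="org.apache.solr.request.PHPSerializedResponseWriter"/>
```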
On Tue, Dec 2, 2008 at 3:01 PM, Marc Sturlese <[EMAIL PROTECTED]> wrote:
>
> Hey there,
>
> I have my DataImportHandler almost completely configured. I am missing three
> goals. I don't think I can reach them just via the XML config or Transformer and
> SqlEntityProcessor plugins. But I need to be sure of th
Hi,
Before I start with Solr specific question, there is one thing I need to get
information on.
If I am a Russian user on a Russian website and I want to search for indexes
having two Russian words, how is the query term going to look?
1. AND
or rather,
2 .
Now over to solr specific
Hey there,
I have my DataImportHandler almost completely configured. I am missing three
goals. I don't think I can reach them just via the XML config or Transformer and
SqlEntityProcessor plugins. But I need to be sure of that.
If there's no other way I will hack some Solr source classes, would like to
kno
No problem. We can fix that using reflection. I shall give a patch with that.
Probably it is better to fix it in a Transformer.
On Tue, Dec 2, 2008 at 1:56 PM, Joel Karlsson <[EMAIL PROTECTED]> wrote:
> True, but perhaps it works with java.sql.Clob as well, haven't tried it
> though.
>
> 2008/12/2 Nob
True, but perhaps it works with java.sql.Clob as well, haven't tried it
though.
2008/12/2 Noble Paul നോബിള് नोब्ळ् <[EMAIL PROTECTED]>
> cool
>
> The only problem is that java.sql.Clob#getCharacterStream() is package
> private and you have to use the oracle.sql.CLOB
>
>
>
> On Tue, Dec 2, 2008 a
cool
The only problem is that java.sql.Clob#getCharacterStream() is package
private and you have to use the oracle.sql.CLOB
On Tue, Dec 2, 2008 at 1:38 PM, Joel Karlsson <[EMAIL PROTECTED]> wrote:
> Thanks for your reply!
>
> I wrote such a transformer and now it seems to work perfectly. Here's
Thanks for your reply!
I wrote such a transformer and now it seems to work perfectly. Here's the
code for the transformer, in case anyone encounters the same problem or
wants to improve it:
import org.apache.solr.handler.dataimport.*;
import oracle.sql.CLOB;
import java.util.*;
import java.io