yeah, finally I did it by modifying the required SolrDocumentList and using
it instead of the DocList object, as in Solr 1.2
Thanks
Pooja
On Fri, Jan 9, 2009 at 9:01 AM, Yonik Seeley wrote:
> On Thu, Jan 8, 2009 at 9:40 PM, Chris Hostetter
> wrote:
> > you have a custom response writer you had work
Did you add the uniqueKey to schema.xml after indexing? If not, you need to
re-index after changing the schema.
Solr/Lucene do not duplicate documents on their own. How are you indexing
documents to Solr? Did the example setup shipped with Solr work correctly
for you?
On Fri, Jan 9, 2009 at 12:27 P
On Fri, Jan 9, 2009 at 4:28 AM, wojtekpia wrote:
>
> What happens if I overlap the execution of my cron jobs? Do any of these
> scripts detect that another instance is already executing?
No, they don't.
Hi all,
I am using CommonsHttpSolrServer to add documents to Solr. Instead
of explicitly calling commit for every document, I have configured
autocommit in solrconfig.xml. But how do we ensure that a document we
add is successfully indexed/committed on the Solr side? Is there any
callback mechanism
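(As later replies in this digest confirm, there is no callback; the closest check is the synchronous response of the add call itself. A minimal SolrJ sketch — the URL and field names are illustrative, not from this thread:

  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.client.solrj.response.UpdateResponse;
  import org.apache.solr.common.SolrInputDocument;

  // illustrative URL; adjust to your deployment
  CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "doc-1");
  UpdateResponse rsp = server.add(doc);  // throws SolrServerException/IOException on failure
  if (rsp.getStatus() != 0) {
      // a non-zero status signals a server-side problem with the update
  }

Note that a successful add only means the document reached Solr; it is not searchable until a commit or autocommit happens.)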
On Fri, Jan 9, 2009 at 12:59 AM, Qingdi wrote:
>
> Hi,
>
> I use solr 1.3 and I have two questions about spellcheck.
>
> 1) if my index docs are like:
>
>   university1 / UNIVERSITY
>   street1, city1 / LOCATION
>
> is it possible to build the spell check dictionary using field "NAME" but
> w
On Fri, Jan 9, 2009 at 3:03 PM, Gargate, Siddharth wrote:
> Hi all,
> I am using CommonsHttpSolrServer to add documents to Solr. Instead
> of explicitly calling commit for every document, I have configured
> autocommit in solrconfig.xml. But how do we ensure that a document we
> add is success
Rohit, I'd guess you don't have <uniqueKey> set to id in schema.xml.
Erik
On Jan 9, 2009, at 1:57 AM, rohit arora wrote:
Hi,
I have added one document only a single time, but the output provided by
Lucene gives me the same document multiple times.
If I specify rows=2, in the output the same document wi
On Jan 9, 2009, at 12:28 AM, smock wrote:
I'm using 1.3 - are the nightly builds stable enough to use in
production?
Testing is always recommended, and no official guarantees are made, of
course, but trunk is vastly superior to 1.3 in faceting performance.
I'd use trunk (in fact I am) in pro
Thanks Shalin for the reply.
I am working with the remote Solr server. I am using autocommit instead
of calling commit because I observed a significant performance
improvement with autocommit.
Just wanted to make sure that callback functionality is currently not
available in Solr.
Thanks,
Siddh
On Fri, Jan 9, 2009 at 4:20 PM, Gargate, Siddharth wrote:
> Thanks Shalin for the reply.
> I am working with the remote Solr server. I am using autocommit instead
> of calling commit because I observed a significant performance
> improvement with autocommit.
> Just wanted to make sure that call
2009/1/8 Otis Gospodnetic
> Hello Mark,
>
> As for assigning different weights to fields, have a look at the DisMax
> request handler -
> http://wiki.apache.org/solr/DisMaxRequestHandler#head-af452050ee272a1c88e2ff89dc0012049e69e180
>
Field boosting should solve this issue too, right?
>
>
> Otis
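(For reference, per-field weights go in DisMax's qf parameter; a minimal SolrJ sketch, with illustrative field names and boosts:

  import org.apache.solr.client.solrj.SolrQuery;

  SolrQuery q = new SolrQuery("ipod");        // illustrative query
  q.setQueryType("dismax");                   // route to the DisMax request handler
  q.set("qf", "name^2.0 description^0.5");    // matches in "name" count far more than in "description"

The equivalent URL form is q=ipod&qt=dismax&qf=name^2.0+description^0.5.)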
How do we set the maxDocs or maxTime for commit from the application?
Thanks,
Siddharth
-----Original Message-----
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Friday, January 09, 2009 4:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Ensuring documents indexed by autoc
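(For reference, maxDocs and maxTime are set in solrconfig.xml, not from the application; a sketch with illustrative values:

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10000</maxDocs> <!-- commit after this many pending documents -->
      <maxTime>60000</maxTime> <!-- or after this many milliseconds -->
    </autoCommit>
  </updateHandler>
)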
Sorry for the previous question. What I meant was whether we can set
the configuration from the code. But what you are suggesting is that I
should call commit only after some amount of time or after some number of
documents, right?
-----Original Message-----
From: Gargate, Siddharth [mailto:sgarg...@ptc.
On Fri, Jan 9, 2009 at 4:47 PM, Gargate, Siddharth wrote:
> But what you are suggesting is that I
> should call commit only after some amount of time or after some number of
> documents, right?
Correct. If you are using the SolrJ client for indexing data, you can use the
SolrServer#add(Collection docs) metho
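(A minimal sketch of that batched form, assuming a SolrServer built as in the earlier messages; the ids are illustrative:

  import java.util.ArrayList;
  import java.util.List;
  import org.apache.solr.common.SolrInputDocument;

  List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
  for (int i = 0; i < 100; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-" + i);
      docs.add(doc);
  }
  server.add(docs);   // one HTTP request for the whole batch
  server.commit();    // one explicit commit for the batch
)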
Shalin,
Just remember that if he is indexing more documents than he has memory
available, it is a good thing to have autocommit set.
2009/1/9 Shalin Shekhar Mangar
> On Fri, Jan 9, 2009 at 4:47 PM, Gargate, Siddharth
> wrote:
>
> > But what you were suggesting is that I
> > should call com
On Fri, Jan 9, 2009 at 5:00 PM, Alexander Ramos Jardim <
alexander.ramos.jar...@gmail.com> wrote:
> Shalin,
>
> Just remember that if he is indexing more documents than he has memory
> available, it is a good thing to have autocommit set.
Yes, sorry, I had assumed that he has enough memory o
Can you show us the query you are running and how you are indexing
documents?
2009/1/9 rohit arora
>
> Hi,
>
> I have added one document only a single time, but the output provided by Lucene
> gives me the same document multiple times.
>
> If I specify rows=2, in the output the same document will be 2 tim
Hey there,
I have been stuck on this problem for three days and have no idea how to sort it out.
I am using the nightly from a week ago, MySQL, and this driver and URL:
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost/my_db"
I can use the deduplication patch with indexes of 200,000 docs with no problem.
Whe
Thanks again for your inputs.
But then I am still stuck on the question of how we ensure that a
document is successfully indexed. One option I see is to search for every
document sent to Solr. Or do we assume that autocommit always indexes
all the documents successfully?
Thanks,
Siddharth
-Or
Can you put the full log (as short as possible while still demonstrating the
problem) somewhere where I can take a look? Likewise, can you share
your schema?
Also, does the spelling index exist under /data/index? If
you open it w/ Luke, does it have entries?
Thanks,
Grant
On Jan 8, 2009, at 11:30 P
I can't imagine why dedupe would have anything to do with this, other
than, as was said, it is perhaps taking a bit longer to get a document
to the db, and it times out (maybe a long signature calculation?). Have
you tried changing your MySQL settings to allow for a longer timeout?
(sorry, I'm
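(One place such timeouts can be raised on the DataImportHandler side is the Connector/J JDBC URL; the values below are illustrative, and the server-side wait_timeout settings in MySQL itself may also matter:

  <!-- data-config.xml -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/my_db?connectTimeout=30000&amp;socketTimeout=600000"
              user="user" password="pass"/>
)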
On Fri, Jan 9, 2009 at 12:18 AM, smock wrote:
> In some ways I have a 'small index' (~8 million documents at the moment).
> However, I have a lot of attributes (currently about 30, but I'm expecting
> that number to keep growing) and am interested in faceting across all of
> them for every search
Paul,
I have looked at those, but want to learn how to do the easy things
first. As I posted below, I can import the example data and then search
against it. Data that I've tried to import seems to import, but I
can't search/find it. I want to know how to do this first, so if you
have any ideas, I would
Otis,
Thanks for your reply. I wrote out a long email explaining the steps I
took, and the results, but it was returned by the solr-user email
server stamped as spam. I've put my note on pastebin; you can see it
here: http://pastebin.cryer.us/pastebin.php?show=m359e2e47
I'd appreciate any feedback
You do a commit in step 1 after the update, right? So if you configure Solr
on the indexer to invoke snapshooter after a commit and optimize, then you
would not need to invoke snapshooter explicitly using cron. snappuller
doesn't do anything unless there is a new snapshot on the indexer.
Bill
O
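(For reference, the hook described above — invoking snapshooter after commit and optimize — is configured as an event listener in solrconfig.xml on the indexer; a sketch using the stock example paths:

  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">solr/bin/snapshooter</str> <!-- run after every commit -->
    <str name="dir">.</str>
    <bool name="wait">true</bool>
  </listener>
)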
hey there,
I didn't have autoCommit set to true, but I have it sorted! The error stopped
appearing after setting the property maxBufferedDocs in solrconfig.xml. I
can't exactly understand why, but it just worked.
Anyway, maxBufferedDocs is deprecated; would ramBufferSizeMB do the same?
Thanks
Marc
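(For reference: ramBufferSizeMB flushes by buffered memory rather than by document count, so it is analogous but not identical to maxBufferedDocs. A solrconfig.xml sketch; the 32 MB value is illustrative:

  <indexDefaults>
    <ramBufferSizeMB>32</ramBufferSizeMB> <!-- flush when buffered changes exceed this many MB -->
  </indexDefaults>
)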
Hi,
I am using the Solr admin page with index.jsp from
<%-- $Id: index.jsp 686780 2008-08-18 15:08:28Z yonik $ --%>
I am getting these errors. Any insight will be helpful.
HTTP Status 500 - javax.servlet.ServletException:
java.lang.NoSuchFieldError: config org.apache.jasper.JasperException:
java
You were searching for "1899" which is the value of the "date" field in the
document you added. You need to specify q=date:1899 to search on the date
field.
You can also use the <defaultSearchField> element in schema.xml to specify
the field on which you'd like to search if no field name is specified in the
query.
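(A one-line schema.xml sketch; the field name is illustrative:

  <defaultSearchField>date</defaultSearchField> <!-- unprefixed query terms search this field -->
)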
On Fri, Jan 9, 2009 at 9:23 PM, Marc Sturlese wrote:
>
> hey there,
> I didn't have autoCommit set to true, but I have it sorted! The error stopped
> appearing after setting the property maxBufferedDocs in solrconfig.xml. I
> can't exactly understand why, but it just worked.
> Anyway, maxBufferedDocs
Hey Shalin,
In the beginning (when the error was appearing) I had
32
and no maxBufferedDocs set.
Now I have:
32
<maxBufferedDocs>50</maxBufferedDocs>
I think that by setting maxBufferedDocs to 50 I am forcing more disk writing
than I would like... but at least it works fine (but a bit slower, obviously).
I keep saying that the most
You're basically writing segments more often now, and somehow avoiding a
longer merge, I think. Also, likely, deduplication is probably adding
enough extra data to your index to hit a sweet spot where a merge is too
long. Or something to that effect - MySQL is especially sensitive to
timeouts when
hi,
I'm looking through the list archives and the documentation on boost
queries, and I don't see anything that matches this case.
I have an index of documents, some of which are very similar but not
identical. Therefore the scores are very close and the ordering is affected
by somewhat arbit
Hey Mark,
Sorry I was not specific enough; I meant that I have, and always have had,
autoCommit=false.
I will do some more traces and test. Will post if I have any new important
thing to mention.
Thanks.
Marc Sturlese wrote:
>
> Hey Shalin,
>
> In the beginning (when the error was appeari
On Jan 9, 2009, at 12:56 PM, Eric Kilby wrote:
Each document has a multivalued field, with 1-n values in it (as many as
20). The actual values don't matter to me, but the number of values is a
rough proxy for the quality of a record. I'd like to apply a very small
boost based on the numbe
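(One common workaround, not from this thread: store the value count in a separate numeric field at index time, then feed it to a DisMax boost function. A sketch assuming a hypothetical numValues field populated at index time:

  import org.apache.solr.client.solrj.SolrQuery;

  SolrQuery q = new SolrQuery("some query");   // illustrative query
  q.setQueryType("dismax");
  q.set("bf", "numValues^0.1");                // small additive boost that grows with the stored count
)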
On Jan 8, 2009, at 9:29 AM, Yevgeniy Belman wrote:
The response I get when executing only the following produces no facet
counts. It could be a bug.
facet.query=[price:[* TO 500], price:[500 TO *]
That's an invalid query. If you want two ranges, use two facet.query
parameters.
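(A SolrJ sketch of the corrected request, with the price boundaries from the original question; note that 500 itself falls into both inclusive ranges as written:

  import org.apache.solr.client.solrj.SolrQuery;

  SolrQuery q = new SolrQuery("*:*");
  q.setFacet(true);
  q.addFacetQuery("price:[* TO 500]");   // one facet.query parameter per range
  q.addFacetQuery("price:[500 TO *]");
)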
Hi,
Thanks for the help.
>If i'm understanding you correctly, you modified the MoreLikeThis class to
>include your new clause (using those two lines above) correct?
Yes.
The time field is a "long" and so are the range variables, so the
problem should not be related to that. If I construct the qu
Please bear with me. I am new to Solr. I have searched all the existing posts
about this and could not find an answer. I wanted to know how to go about
creating a SolrServer using EmbeddedSolrServer. I tried to initialize it several ways
but was unsuccessful. I do not have multi-core. I am us
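(For reference, a minimal single-core sketch; the solr home path is a placeholder:

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
  import org.apache.solr.core.CoreContainer;

  // point solr.solr.home at a directory containing conf/solrconfig.xml and conf/schema.xml
  System.setProperty("solr.solr.home", "/path/to/solr/home");
  CoreContainer.Initializer initializer = new CoreContainer.Initializer();
  CoreContainer container = initializer.initialize();
  // an empty core name selects the default core in a single-core setup
  SolrServer server = new EmbeddedSolrServer(container, "");
)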
: Thanks again for your inputs.
: But then I am still stuck on the question of how we ensure that a
: document is successfully indexed. One option I see is to search for every
Have faith.
If the add completes successfully, then the data made it to Solr, was
indexed, and now lives in the index fil
http://issues.apache.org/jira/browse/NUTCH-442
Haven't used Nutch. Can the Nutch-generated index be reverse-engineered into
a Solr schema? In that case, you can just copy the Lucene index files away
from Nutch and run them under Solr.
Thanks Lance! I have no idea whether the Nutch-generated index could be
converted to a Solr schema. I wonder what people are using NUTCH-442 for
(http://issues.apache.org/jira/browse/NUTCH-442).
So what crawler do you use to generate an index for Solr? Thanks a lot!!
On Fri, Jan 9, 2009 at 8:04 PM
The UUID field type is not documented on the Wiki.
https://issues.apache.org/jira/browse/SOLR-308
The ExtractingRequestHandler creates its own UUID instead of using the UUID
field type.
http://issues.apache.org/jira/browse/SOLR-284
I don't know about the Nutch format -> Solr schema idea either. The
NUTCH-442 system uses Solr for both indexing and searching, and uses Nutch
only for crawling.
At my last job we had a custom scripting system that crawled the front page
of over 5000 sites. Each site had a configured script. Yes,
On Jan 9, 2009, at 8:12 PM, qp19 wrote:
Please bear with me. I am new to Solr. I have searched all the existing posts
about this and could not find an answer. I wanted to know how to go about
creating a SolrServer using EmbeddedSolrServer. I tried to initialize it
several ways
but
2009/1/10 Lance Norskog
>
> I have used the RSS format of the DataImportHandler, and it works well but
> has problems with detecting errors etc. That is, it works well when it works,
> but does not fail gracefully in a useful way.
>
>
Lance, some error handling logic was added after you describ