On Jul 7, 2015 11:39 AM, "Jacob Singh" wrote:
>
>
> --
> +1 512-522-6281
> twitter: @JacobSingh ( http://twitter.com/#!/JacobSingh )
> web: http://www.jacobsingh.name
> Skype: pajamadesign
> gTalk: jacobsi...@gmail.com
>
files. But even with all of these fancy
> options available, I'd still just use the alternate web.xml technique that
> Grant proposed.
>
> Erik
>
>
> On May 13, 2009, at 10:55 PM, Jacob Singh wrote:
>
>> Hi Grant,
>>
>> That's not a bad
FYI: dist-war is in build.xml, not common-build.xml.
>
> -Grant
>
> On May 12, 2009, at 5:52 AM, Jacob Singh wrote:
>
>> Hi folks,
>>
>> I just wrote a Servlet Filter to handle authentication for our
>> service. Here's what I did:
>>
>>
Hi folks,
I just wrote a Servlet Filter to handle authentication for our
service. Here's what I did:
1. Created a dir in contrib
2. Put my project in there, I took the dataimporthandler build.xml as
an example and modified it to suit my needs. Worked great!
3. ant dist now builds my jar and inc
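A minimal sketch of what such a contrib build file can look like, assuming a common-build import and property names similar to the rest of the tree (everything here is illustrative, not the actual dataimporthandler file):

```xml
<!-- Hypothetical contrib/authfilter/build.xml, modeled loosely on the
     dataimporthandler contrib build; project name, targets, and
     properties are illustrative. -->
<project name="solr-authfilter" default="dist">
  <import file="../../common-build.xml"/>

  <!-- compile the filter sources, then jar them so a top-level
       "ant dist" can pick the artifact up -->
  <target name="dist" depends="compile">
    <jar destfile="${dist}/apache-solr-authfilter-${version}.jar"
         basedir="${dest}/classes"/>
  </target>
</project>
```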
We rebooted a machine, and the permissions on the external drive where
the index was stored had changed. We didn't realize it immediately,
because searches were working and updates were not throwing errors
back to the client.
These ended up in catalina.out
Apr 22, 2009 11:57:12 PM org.apache.sol
try ulimit -n5 or something
On Mon, Apr 6, 2009 at 6:28 PM, Jarek Zgoda wrote:
> I'm indexing a set of 50 small documents. I'm adding documents in
> batches of 1000. At the beginning I had a setup that optimized the index
> each 1 documents, but quickly I had to optimize after adding
Hi TIA,
I have the same desired requirement. If you look up in the archives,
you might find a similar thread between myself and the always super
helpful Erik Hatcher. Basically, it can't be done (right now).
You can however use the "ExtractOnly" request handler, and just get
the extracted text
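The "ExtractOnly" mode mentioned above is a parameter on the extract handler: with extractOnly=true, Solr returns the text Tika pulled out of the file instead of indexing it. A sketch of building such a request URL (base URL and handler path are assumptions about a typical setup):

```python
from urllib.parse import urlencode

def extract_only_url(base="http://localhost:8983/solr"):
    """Build a request URL for the extract handler in extract-only
    mode; POSTing a file to this URL returns the extracted text
    rather than indexing the document."""
    params = {"extractOnly": "true", "wt": "xml"}
    return f"{base}/update/extract?{urlencode(params)}"

print(extract_only_url())
```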
On Tue, Mar 24, 2009 at 5:52 AM, Jacob Singh wrote:
> If I'm using autocommit, and I have a crash of tomcat (or the whole
> machine) while there are still docs pending, will I lose those
> documents in limbo
Yep.
> If the answer is "they go away": Is there anyway
Hi,
If I'm using autocommit, and I have a crash of tomcat (or the whole
machine) while there are still docs pending, will I lose those
documents in limbo, or will I just be able to restart and then the
commit will run?
If the answer is "they go away": Is there anyway to ensure integrity
of an upd
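For reference, the autoCommit settings under discussion live in solrconfig.xml; a sketch with illustrative thresholds (documents still buffered below these limits are exactly the ones in limbo if the container dies before a commit):

```xml
<!-- Sketch of a solrconfig.xml autoCommit block; thresholds are
     illustrative, not recommendations. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>1000</maxDocs>    <!-- flush after this many pending docs -->
    <maxTime>60000</maxTime>   <!-- or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```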
Hi,
We ran into a weird one today. We have a document which is written in
German and every time we make a query which matches it, we get the
following:
java.lang.StringIndexOutOfBoundsException: String index out of range: 2822
at java.lang.String.substring(String.java:1935)
at
or
Hi,
I'm trying to write some code to build a facet list for a date field,
but I don't know what the first and last available dates are. I would
adjust the gap param accordingly. If there is a 10yr stretch between
min(date) and max(date) I'd want to facet by year. If it is a 1 month
gap, I'd wan
*Jacob Singh feels dumb*
Thanks!
On Fri, Feb 13, 2009 at 9:14 PM, Shalin Shekhar Mangar
wrote:
> Jacob, the output of stats.jsp is an XML which you can consume in your
> program. It is transformed to html using XSL.
>
> On Fri, Feb 13, 2009 at 9:09 PM, Jacob Singh wrote:
pdate Handlers > status > docsPending.
>
> Koji
>
> Jacob Singh wrote:
>>
>> Hi,
>>
>> Is there a way to retrieve the # of documents which are pending commit
>> (when using autocommit)?
>>
>> Thanks,
>> Jacob
>>
>>
>
>
Hi,
Is there a way to retrieve the # of documents which are pending commit
(when using autocommit)?
Thanks,
Jacob
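As noted in the replies, stats.jsp emits XML that a program can consume directly, and the pending-commit count shows up under Update Handlers > status > docsPending. A sketch of pulling that value out of such a response (the exact element layout varies by Solr version; this assumes `<stat name="docsPending">` entries somewhere in the tree):

```python
import xml.etree.ElementTree as ET

def docs_pending(stats_xml):
    """Find the docsPending stat in a stats.jsp-style XML response.
    Assumes <stat name="docsPending">N</stat> appears in the tree;
    returns None if the stat is absent."""
    root = ET.fromstring(stats_xml)
    for stat in root.iter("stat"):
        if stat.get("name") == "docsPending":
            return int(stat.text.strip())
    return None

sample = """<solr><entry><stats>
  <stat name="docsPending"> 42 </stat>
</stats></entry></solr>"""
print(docs_pending(sample))  # → 42
```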
0 is actually a communication failure (can't connect at all).
200 is good
Solr returns 400s when it bails. I always thought this was strange,
because I thought 500 is an application error (what I would expect)
and 400 is a general HTTP error.
Best,
J
On Tue, Feb 10, 2009 at 7:22 AM, Koji Sekig
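The status-code behavior described above (0 when the connection fails outright, 200 for success, 400 when Solr bails) can be encoded in a small client-side helper; the mapping follows the observations in this thread, not a formal spec:

```python
def classify_solr_status(code):
    """Rough interpretation of the HTTP status from a Solr request,
    per the observations in this thread."""
    if code == 0:
        return "communication failure"   # could not connect at all
    if code == 200:
        return "ok"
    if 400 <= code < 500:
        return "solr bailed on the request"
    if 500 <= code < 600:
        return "server/application error"
    return "unexpected"

print(classify_solr_status(400))
```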
eat
to show you.
Best,
Jacob
On Thu, Jan 29, 2009 at 1:16 PM, Mark Miller wrote:
> Jacob Singh wrote:
>>
>> Sorry if this is wrong place to ask since Solr Gaze is Lucid's
>> project, but I was trying to install this in a multicore environment,
>> and it doesn't
Sorry if this is wrong place to ask since Solr Gaze is Lucid's
project, but I was trying to install this in a multicore environment,
and it doesn't seem to be working.
It says to add the plugin to solr.home/lib.
Which solr.home? I got to /gaze and of course, it doesn't know where to look.
Thank
know even if it doesn't get fixed.
Best,
Jacob
On Fri, Jan 16, 2009 at 9:44 AM, Noble Paul നോബിള് नोब्ळ्
wrote:
> On Fri, Jan 16, 2009 at 7:14 PM, Jacob Singh wrote:
>> Hi Shalin,
>>
>> Sorry, my post was unclear. I am calling snappull from the slave, I
>>
>
> The master is showing indexversion as 0 because you haven't called commit on
> the master yet. Can you call commit and see if replication happens on the
> slave?
>
> On Fri, Jan 16, 2009 at 2:24 AM, Jacob Singh wrote:
>
>> Hi Shalin,
>>
>> Thanks fo
Hi,
How do I find out the status of a slave's index? I have the following scenario:
1. Boot up the slave. I give it a core name of boot-$CoreName.
2. I call boot-$CoreName/replication?command=snappull
3. I check back every minute using cron and I want to see if the slave
has actually gotten t
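One way to script step 3 above is to hit /replication?command=indexversion on both master and slave from the cron job and compare the two values. A sketch with the URL building and comparison factored out (host names and the "caught up when slave version reaches master version" rule are assumptions):

```python
from urllib.parse import urljoin

def indexversion_url(core_url):
    """URL for the indexversion command on a core's replication
    handler, e.g. given http://host:8080/solr/boot-core1/ as the
    (hypothetical) core URL."""
    return urljoin(core_url, "replication") + "?command=indexversion"

def slave_caught_up(master_version, slave_version):
    """Treat the slave as in sync once its indexversion is at least
    the master's; versions are the long values the command returns."""
    return slave_version >= master_version

print(indexversion_url("http://host:8080/solr/boot-core1/"))
```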
the output of /replication?command=indexversion on the master?
>
>
>
> On Fri, Jan 16, 2009 at 1:27 AM, Jacob Singh <jacobsi...@gmail.com> wrote:
>
> > Hi folks,
> >
> > Here's what I've got going:
> >
> > Master
Hi folks,
Here's what I've got going:
Master Server with the following config:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str>
  </lst>
</requestHandler>

Slave server with the following:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://mydomain:8080/solr/065f079c24914a4103e2a57178164bbe/replication</str>
    <str name="pollInterval">00:00:20</str>
  </lst>
</requestHandler>
I
Has there been a discussion anywhere about a "binary log" style
replication scheme (ala mysql)? Wherein every write request goes to
the master, and the slaves read in a queue of the requests and
update themselves one record at a time instead of wholesale? Or is
this just not worth the devel
Hi,
I did this. The only option I've found is to use Matt's attached solution.
I suggest just using MultiCore/CoreAdmin though.
Best,
Jacob
On Mon, Jan 5, 2009 at 8:47 AM, gwk wrote:
> Hello,
>
>
> I'm trying to get multiple instances of Solr running with Jetty as per
> the instructions on ht
onsolidate? For example, would it be
> possible for you to take any new and useful functionality that you've built
> into your client and add it to solrpy?
>
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Origin
you can't change your php.ini you can also usually just run something
> like this:
>
> <?php
> ini_set('memory_limit', '128M');
> ?>
>
> at the top of your script, at least with most distributions I've seen.
>
> --
> Steve
>
> On Dec 28, 2008,
Hi Sujatha,
Try setting the memory allotted to your servlet container to a higher
amount with the -Xmx and -Xms Java settings.
Best,
Jacob
On Sun, Dec 28, 2008 at 10:06 AM, Sujatha Arun wrote:
> Hi ,
>
> I am getting this error :
>
> Allowed memory size of 16777216 bytes exhausted (tried to all
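Concretely, the -Xmx/-Xms advice above means passing heap flags to the container's JVM; values and placement are illustrative (for Tomcat this often goes in a setenv.sh or wherever JAVA_OPTS is set):

```
JAVA_OPTS="$JAVA_OPTS -Xms256m -Xmx1024m"
```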
I hacked a very incomplete one up for a recent task:
http://pastebin.ca/1294198
I don't know the status of solrpy, but if people are interested in
running with this, I can put a license header on it and add it
somewhere.
Best,
Jacob
On Tue, Dec 23, 2008 at 9:53 AM, Ed Summers wrote:
> It should
On Wed, Dec 17, 2008 at 11:06 AM, Chris Hostetter
wrote:
>
> : > : If I can find the bandwidth, I'd like to make something which allows
> : > : file uploads via the XMLUpdateHandler as well... Do you have any ideas
> : >
> : > the XmlUpdateRequestHandler already supports file uploads ... all reque
at 8:20 AM, Jacob Singh wrote:
>
>> Hi Erik,
>>
>> Sorry I wasn't totally clear. Some responses inline:
>>>
>>> If the file is visible from the Solr server, there is no need to actually
>>> send the bits through HTTP. Solr's content ste
Hi Erik,
Sorry I wasn't totally clear. Some responses inline:
> If the file is visible from the Solr server, there is no need to actually
> send the bits through HTTP. Solr's content stream capabilities allow a file
> to be retrieved from Solr itself.
>
Yeah, I know. But in my case not possible
Hi Erik,
This is indeed what I was talking about... It could even be handled
via some type of transient file storage system. this might even be
better to avoid the risks associated with uploading a huge file across
a network and might (have no idea) be easier to implement.
So I could send the fi
OST">
>
>
>
> Choose a file to upload:
>
>
>
> Cheers,
> Grant
>
> On Dec 12, 2008, at 11:53 PM, Jacob Singh wrote:
>
>> Hi Grant,
>>
>> Thanks for the quick response. My Colleague looked into the code a
>> bit, and I
y to whip up some SolrJ sample code, as I know others have asked for
> that.
>
> -Grant
>
> On Dec 12, 2008, at 5:34 AM, Jacob Singh wrote:
>
>> Hi Grant,
>>
>> Happy to.
>>
>> Currently we are sending over documents by building a big XML file of
12, 2008 at 4:52 AM, Grant Ingersoll wrote:
>
> On Dec 10, 2008, at 10:21 PM, Jacob Singh wrote:
>
>> Hey folks,
>>
>> I'm looking at implementing ExtractingRequestHandler in the Apache_Solr_PHP
>> library, and I'm wondering what we can do about adding m
Hey folks,
I'm looking at implementing ExtractingRequestHandler in the Apache_Solr_PHP
library, and I'm wondering what we can do about adding meta-data.
I saw the docs, which suggests you use different post headers to pass field
values along with ext.literal. Is there anyway to use the XmlUpdate
Hi folks,
I'm working on creating a schema which will accommodate the following
(likely common) scenario and was hoping for some best practices:
We have stories which are objects culled from various fields in our
database. We currently index them with a bunch of meta-data for faceting,
sorting,
Hi,
I'm trying to write a testing suite to gauge the performance of solr
searches. To do so, I'd like to be able to find out what keywords
will get me search results. Is there any way to programmatically do this
with luke? I'm trying to figure out what all it exposes, but I'm not
seeing this.
Any
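The Luke request handler can report the top terms per field, which is one way to harvest keywords that are guaranteed to produce hits; a sketch of building that request (handler path and the `numTerms`/`fl` parameters follow the LukeRequestHandler conventions, host and field name are assumptions):

```python
from urllib.parse import urlencode

def top_terms_url(base="http://localhost:8983/solr", field="text", count=50):
    """URL asking the Luke handler for the top `count` terms in
    `field`; those terms make good guaranteed-hit test queries for a
    performance suite."""
    params = {"fl": field, "numTerms": count, "wt": "xml"}
    return f"{base}/admin/luke?{urlencode(params)}"

print(top_terms_url())
```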
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: Jacob Singh <[EMAIL PROTECTED]>
>> To: solr-user@lucene.apache.org
>> Sent: Sunday, September 21, 2008 12:43:09 AM
>> Subject: Re: How to keep a slave offl
xt.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: Jacob Singh <[EMAIL PROTECTED]>
>> To: solr-user@lucene.apache.org
>> Sent: Saturday, September 20, 2008 5:54:39 AM
>> Subject: How to keep a slave offline until the index is pu
Hi,
I'm running multiple instances (solr 1.2) on a single jetty server using JNDI.
When I launch a slave, it has to retrieve all of the indexes from the
master server using the snapuller / snapinstaller.
This works fine, however, I don't want to wait to activate the slave
(turn on jetty) while w
ww.xml.com/pub/a/2001/03/14/trxml10.html>
>
> Erik
>
>
> On Aug 9, 2008, at 11:16 PM, Otis Gospodnetic wrote:
>> No, not possible.
>>
>> Otis
>> --
>> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>>
>>
>>
>> --
Hello,
Is it possible to include an external xml file from within solrconfig.xml?
Or even better, to scan a directory ala conf.d in apache?
Thanks,
jacob
e args and env sections and see. It works great for me that
> way.
>
> On Fri, Aug 8, 2008 at 12:57 PM, Jacob Singh <[EMAIL PROTECTED]> wrote:
>
>> Hi Shanlin,
>>
>> Thanks for your reply.
>>
>> I tried the following:
>>
>> /opt/solr/
ache.org/solr/
>
> On Fri, Aug 8, 2008 at 6:26 AM, Jacob Singh <[EMAIL PROTECTED]> wrote:
>
>> I see in the docs:
>>
>> System property substitution
>>
>> Solr supports system property substitution, allowing the launching JVM
>> to specify string
I see in the docs:
System property substitution
Solr supports system property substitution, allowing the launching JVM
to specify string substitutions within either of Solr's configuration
files. The syntax ${property[:default value]}. Substitutions are valid
in any element or attribute text. Her
I tried:

<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">bin</str>
  <bool name="wait">true</bool>
  <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
  <arr name="env"> <str>MYVAR=val1</str> </arr>
</listener>
However, it doesn't seem to work. I assumed that it defaults to the
solrhome, but that doesn't seem to be the case...
Is it defaulting to the jetty root? Or the conf dir?
Thanks,
Jacob
Thank you so much everyone.
This community is really one of the most helpful I've ever run across on
the interwebs.
So I understand that the document is not live in the index until a
commit is run.
Commits should be run nightly.
Isn't this a problem though to have to wait 24hrs for a new docume
tting just fine, as they show up, but the commit script is still not
>> being called.
>>
>> Best,
>> jacob
>>
>> Alexander Ramos Jardim wrote:
>> > You can configure the autocommit feature in solrconfig.xml to get
>> > commit to work from time to time or ba
is still not being called.
Best,
jacob
Alexander Ramos Jardim wrote:
> You can configure the autocommit feature in solrconfig.xml to get commit to
> work from time to time or based in the number of documents added to your
> index.
>
> 2008/8/7 Jacob Singh <[EMAIL PROTECTED]>
Hi,
I'm using the XML based update interface, and feeding requests to update
the index via jetty. It all works great, however now I'm trying to get
replication running, and here's what I understand:
1. An index update comes in.
2. Solr runs the commit script
3. a post-commit event is specified i
Hi Paul,
I actually use google analytics for this, since it is setup to do it.
In fact, you can configure your GA profile to treat your search page as
a search page and track the effectiveness of searches and even some
support for filters!
Check it out.
-J
pdovyda2 wrote:
> Hey Guys,
>
> I've b
Hi Hoss!
I'll check out from the svn repo. I don't think I can edit it, but
someone should update the wiki page.
Thanks a lot!
Best,
Jacob
Chris Hostetter wrote:
> : I will look into the listener, but what about the first part of my
> : question? It says it is failing, but doesn't look like it
st,
Jacob
Noble Paul നോബിള് नोब्ळ् wrote:
> How are you committing? did you use the commit script?
>
>
>
>
> On Thu, Jul 10, 2008 at 7:39 PM, Jacob Singh <[EMAIL PROTECTED]> wrote:
>> Thanks Noble, kya bath hai? Nice Hindi :) Can't read the Thai.
> Knowin
> Commmit automaticallly does not create snapshots. You must register
> the listener to do so
>
> http://wiki.apache.org/solr/CollectionDistribution#head-532ab57f4a3a9cc3ce129a9fb698a01aceb6d0c2
>
> --Noble
>
>
> On Thu, Jul 10, 2008 at 11:56 AM, Jacob Singh <[EMAIL PROTECTED
Hi,
I'm trying to get replication working, and it's failing because commit
refuses to work (at least as I understand it).
I run commit and point it to the update URL. I know the URL is correct,
because solr returns something to me:
commit request to Solr at http://solr.solrflare.com:8080/solr/a
My total guess is that indexing is CPU bound, and searching is RAM bound.
Best,
Jacob
Ian Connor wrote:
> There was a thread a while ago, that suggested just need to factor in
> the index's total size (Mike Klaas I think was the author). It was
> suggested having the RAM is enough and the OS will
Hey,
Sorry to bug everyone again in my newbieness, but this is a quick one, I
promise :)
I'm running a master and a slave, both on debian using jetty6 (from deb)
jetty6 runs under user jetty which has no group. It writes files as
jetty.nogroup 664.
This means my data directory is 664.
jetty i
Hi,
I just managed to hack the distribution scripts a little so that I can
specify a different rsyncd module so that I can have multiple indexes
rsyncing from the same server on the same port! yeh!
Okay, so I'm very excited that it's almost working, but I have one
pretty huge issue. Every time I
I've read up quite a bit now.
Thanks,
Jacob
>
> Bill
>
> On Tue, Jun 10, 2008 at 4:24 AM, Jacob Singh <[EMAIL PROTECTED]> wrote:
>
>> Hey folks,
>>
>> I'm messing around with running multiple indexes on the same server
>> using Jetty conte
er second. (the whole index did fit into ram)
> I can send you my saved test case if this would help you.
>
> Nico
>
>
> Jacob Singh wrote:
>> Hi Nico,
>>
>> Thanks for the info. Do you have you scripts available for this?
>>
>> Also, is it configur
time seems to rise linear for a little while and
> then exponentially. But that might also be the result of my test szenario.
>
> Nico
>
>
>> -Original Message-
>> From: Jacob Singh [mailto:[EMAIL PROTECTED]
>> Sent: Sunday, June 29, 2008 6:04 PM
>> T
would aproximately tell us where a server will start to
bail.
Does anyone have any better ideas?
Best,
Jacob Singh
Hi,
I see this has been discussed:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg08150.html
and I've read the wiki.
I've got replication working okay, but I'm not trying to do replication.
Rather, I want to:
1. Get a hot backup of a master server (meaning no interruption of servic
cepts while the
> switch is in progress.
>
>
> You could also have M1 and M2 access the same index instance (e.g. on a SAN)
> and avoid index replication, thus minimizing interruption time.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
Hi Rusli,
Is there a URL you'd like to reference for where you got the patch?
That would probably help.
For windows I suppose you'll have to google around to find a version of
"patch" which runs there. Beyond Compare is a windows app which has
patching capabilities. patch is a program for *nix
Hi again :)
I'm also working on a scenario where there is an architecture like this:
(here comes poor man's Visio)
M2
|
M1
|
---
/ \
S1 S2
The catch is M2 isn't always online. The idea being, M1 is online to
take small updates like removing a certain entry from index or one off
ch
Hey folks,
I'm messing around with running multiple indexes on the same server
using Jetty contexts. I've got the running groovy thanks to the
tutorial on the wiki, however I'm a little confused how the collection
distribution stuff will work for replication.
The rsyncd-enable command is simple