I have an application where I am calling DirectUpdateHandler2 directly with:
update.addDoc(cmd);
This will sometimes hit:
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.UnicodeUtil.UTF16toUTF8(UnicodeUtil.java:248)
at org.apache.lucene.store.DataOutput.writeString(DataOu
I have an application where I need to return all results that are not
in a Set (the Set is managed from hazelcast... but that is
not relevant)
As a first approach, I have a SearchComponent that injects a BooleanQuery:
BooleanQuery bq = new BooleanQuery(true);
for( String id : ids) {
  bq.add(new TermQuery(new Term("id", id)), BooleanClause.Occur.MUST_NOT);
}
If there is a real desire/need to make things "restful" in the
official sense, it is worth looking at using a REST framework as the
controller rather than the current solution. Perhaps:
http://www.restlet.org/
https://jersey.dev.java.net/
These would be cool since they encapsulate lots of the re
Any pointers on how to sort by reverse index order?
http://search.lucidimagination.com/search/document/4a59ded3966271ca/sort_by_index_order_desc
it seems like it should be easy to do with the function query stuff,
but i'm not sure what to sort by (unless I add a new field for indexed
time)
Any p
Looks like you can sort by _docid_ to get things in index order or
reverse index order.
?sort=_docid_ asc
thank you solr!
On Fri, Jul 23, 2010 at 2:23 PM, Ryan McKinley wrote:
> Any pointers on how to sort by reverse index order?
> http://search.lucidimagination.com/search/do
I have a function that works well in 3.x, but when I tried to
re-implement it in 4.x it runs very slowly (~20ms vs 45s on an index with
~100K items).
Big picture, I am trying to calculate a bounding box for items that
match the query. To calculate this, I have two fields bboxNS, and
bboxEW that get
ed; Yonik any ideas? I'm not familiar with this part of
> Solr...
>
> Mike
>
> On Mon, Aug 23, 2010 at 2:38 AM, Ryan McKinley wrote:
>> I have a function that works well in 3.x, but when I tried to
>> re-implement in 4.x it runs very very slow (~20ms vs 45s on
Note that the 'setRequestWriter' is not part of the SolrServer API, it
is on the CommonsHttpSolrServer:
http://lucene.apache.org/solr/api/org/apache/solr/client/solrj/impl/CommonsHttpSolrServer.html#setRequestWriter%28org.apache.solr.client.solrj.request.RequestWriter%29
If you are using EmbeddedS
Check:
http://lucene.apache.org/java/3_0_2/fileformats.html
On Tue, Sep 7, 2010 at 3:16 AM, rajini maski wrote:
> All,
>
> While we post data to Solr... The data get stored in "//data/index" path
> in some multiple files with different file extensions...
> Not worrying about the extensions, I
> I suppose an index 'remaker' might be something like a DIH reader for
> a Solr index - streams everything out of the existing index, writing
> it into the new one?
This works fine if all fields are stored (and copy field does not go
to a stored field), otherwise you would need/want to start with
check:
http://wiki.apache.org/solr/LukeRequestHandler
On Mon, Sep 13, 2010 at 7:00 PM, Peter A. Kirk wrote:
> Hi
>
> is it possible to issue a query to solr, to get a list which contains all the
> field names in the index?
>
> What about to get a list of the freqency of individual words in eac
Multiple threads work well.
If you are using solrj, check the StreamingSolrServer for an
implementation that will keep X number of threads busy.
Your mileage will vary, but in general I find a reasonable thread
count is ~ (number of cores)+1
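As a rough sketch of that rule of thumb (in solrj the thread count is a constructor argument to StreamingUpdateSolrServer; the class below is just an illustration, not from the thread):

```java
// Rough sketch of the (number of cores)+1 rule of thumb above.
// Starting point only -- tune against your own hardware and documents.
public class ThreadCount {
    public static int suggestedThreads() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    public static void main(String[] args) {
        // e.g. pass this as the threadCount argument when constructing
        // a StreamingUpdateSolrServer
        System.out.println(suggestedThreads());
    }
}
```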
On Wed, Sep 22, 2010 at 5:52 AM, Andy wrote:
> Does
*:*
will leave you a fresh index
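Hedged sketch of that delete-by-query as an update message (the exact handler path depends on your configuration):

```xml
<delete><query>*:*</query></delete>
<commit/>
```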
On Thu, Sep 23, 2010 at 12:50 AM, xu cheng wrote:
> the query that fetch the data you wanna
> delete
> I did like this to delete my data
> best regards
>
> 2010/9/23 Igor Chudov
>
>> Let's say that I added a number of elements to Solr (I use
>> Webservice::Solr
On Mon, Oct 18, 2010 at 10:12 AM, Tharindu Mathew wrote:
> Thanks Peter. That helps a lot. It's weird that this is not documented anywhere.
> :(
Feel free to edit the wiki :)
Do you already have the files as solr XML? If so, I don't think you need solrj
If you need to build SolrInputDocuments from your existing structure,
solrj is a good choice. If you are indexing lots of stuff, check the
StreamingUpdateSolrServer:
http://lucene.apache.org/solr/api/solrj/org/apache/
I have an indexing pipeline that occasionally needs to check if a
document is already in the index (even if not committed yet).
Any suggestions on how to do this without calling commit before each check?
I have a list of document ids and need to know which ones are in the
index (actually I need to know
I'm looking for a way to quickly flag/unflag documents.
This could be one at a time or by query (even *:*)
I have hacked together something based on ExternalFileField that is
essentially a FST holding all the ids (solr not lucene). Like the
FieldCache, it holds a WeakHashMap where the
OpenBitSet
Hi Matthias-
I'm trying to understand how you have your data indexed so we can give
reasonable direction.
What field type are you using for your locations? Is it using the
solr spatial field types? What do you see when you look at the debug
information from &debugQuery=true?
From my experienc
On Wed, Mar 7, 2012 at 7:25 AM, Matt Mitchell wrote:
> Hi,
>
> I'm researching options for handling a better geospatial solution. I'm
> currently using Solr 3.5 for a read-only "database", and the
> point/radius searches work great. But I'd like to start doing point in
> polygon searches as well.
There have been a bunch of changes getting the zookeeper info and UI
looking good. The info moved from being on the core to using a
servlet at the root level.
Note, it is not a request handler anymore, so the wt=XXX has no
effect. It is always JSON
ryan
On Fri, Apr 6, 2012 at 7:01 AM, Jamie J
zookeeper.jsp was removed (along with all JSP stuff) in trunk
Take a look at the cloud tab in the UI, or check the /zookeeper
servlet for the JSON raw output
ryan
On Mon, Apr 9, 2012 at 6:42 AM, Benson Margulies wrote:
> Starting the leader with:
>
> java -Dbootstrap_confdir=./solr/conf -Dcol
In general -- i would not suggest mixing EmbeddedSolrServer with a
different style (unless the other instances are read only). If you
have multiple instances writing to the same files on disk you are
asking for problems.
Have you tried just using StreamingUpdateSolrServer for daily update?
I woul
I would suggest debugging with browser requests -- then switching to
Solrj after you are at 1st base.
In particular, try adding the &debugQuery=true parameter to the
request and see what solr thinks is happening.
The value that will "work" for the 'qt' parameter depends on what is
configured in s
check a release since r1332752
If things still look problematic, post a comment on:
https://issues.apache.org/jira/browse/SOLR-3426
this should now have a less verbose message with an older SLF4j and with Log4j
On Tue, May 1, 2012 at 10:14 AM, Gopal Patwa wrote:
> I have similar issue using lo
If your json value is & the proper xml value is &amp;
What is the value you are setting on the stored field? is it & or &amp;?
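A minimal sketch of the escaping in question (the helper name is made up, and a real client should also escape <, > and quotes):

```java
public class XmlEscape {
    // turn a raw value into one that is safe inside an XML text node
    // (only handles '&' -- enough to illustrate the & vs &amp; point)
    public static String escapeAmp(String s) {
        return s.replace("&", "&amp;");
    }

    public static void main(String[] args) {
        System.out.println(escapeAmp("AT&T"));  // AT&amp;T
    }
}
```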
On Mon, Apr 30, 2012 at 12:57 PM, William Bell wrote:
> One idea was to wrap the field with CDATA. Or base64 encode it.
>
>
>
> On Fri, Apr 27, 2012 at 7:50 PM, Bill Bell
thanks!
On Wed, May 2, 2012 at 4:43 PM, Chris Hostetter
wrote:
>
> : How do I search for things that have no value or a specified value?
>
> Things with no value...
> (*:* -fieldName:[* TO *])
> Things with a specific value...
> fieldName:A
> Things with no value or a specific val
In 4.0, solr no longer uses JSP, so it is not enabled in the example setup.
You can enable JSP in your servlet container using whatever method
they provide. For Jetty, using start.jar, you need to add the command
line: java -jar start.jar -OPTIONS=jsp
ryan
On Mon, May 14, 2012 at 2:34 PM, Nag
the right zookeeper url in 4.0 please?
>
> Thanks
> Naga
>
>
> On 5/15/12 10:56 AM, "Ryan McKinley" wrote:
>
>>In 4.0, solr no longer uses JSP, so it is not enabled in the example
>>setup.
>>
>>You can enable JSP in your servlet container u
for the ExtractingRequestHandler, you can put anything into the
request contentType.
try:
addFile( file, "application/octet-stream" )
but anything should work
ryan
On Thu, Jun 7, 2012 at 2:32 PM, Koorosh Vakhshoori
wrote:
> In latest 4.0 release, the addFile() method has a new argument 'con
also try &debugQuery=true and see why each result matched
On Thu, Dec 30, 2010 at 4:10 PM, mrw wrote:
>
>
> Basically, just what you've suggested. I did the field/query analysis piece
> with verbose output. Not entirely sure how to interpret the results, of
> course. Currently reading anythi
>
> Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [X] I/we build them from source via an SVN/Git checkout.
>
I am using the edismax query parser -- it's awesome! It works well for
standard dismax type queries, and allows explicit fields when
necessary.
I have hit a snag when people enter something that looks like a windows path:
F:\path\to\a\file
this gets parsed as:
F:\path\to\a\file
F:\path\to\a\file
+
ah -- that makes sense.
Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
wrote:
>
> : extending edismax. Perhaps when F: does not match a given field, it
> : could auto
>
> foo_s:foo\-bar
> is a valid lucene query (with only a dash between the foo and the
> bar), and presumably it should be treated the same in edismax.
> Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
> bar) might cause more problems than it's worth?
>
I don't think we shou
I have an odd need, and want to make sure I am not reinventing a wheel...
Similar to the QueryElevationComponent, I need to be able to move
documents to the top of a list that match a given query.
If there were no sort, then this could be implemented easily with
BooleanQuery (i think) but with so
You may want to check the stats via JMX. For example,
http://localhost:8983/solr/core/admin/mbeans?stats=true&key=org.apache.solr.handler.StandardRequestHandler
shows some basic stats info for the handler.
If you are running nagios or similar, they have tools that can log
values from JMX. this
t_By_Function
On Fri, Feb 11, 2011 at 4:31 PM, Ryan McKinley wrote:
> I have an odd need, and want to make sure I am not reinventing a wheel...
>
> Similar to the QueryElevationComponent, I need to be able to move
> documents to the top of a list that match a given query.
>
>
Not crazy -- but be aware of a few *key* caveats.
1. Do good testing on a stable snapshot.
2. Don't get surprised if you have to rebuild the index from scratch
to upgrade in the future. The official releases will upgrade smoothly
-- but within dev builds, anything may happen.
On Sat, Feb 19,
You may have noticed the ResponseWriter code is pretty hairy! Things
are package protected so that the API can change between minor releases
without concern for back compatibility.
In 4.0 (/trunk) I hope to rework the whole ResponseWriter framework so
that it is more clean and hopefully stable eno
> Does anyone know of a patch or even when this functionality might be included
> in to Solr4.0? I need to query for polygons ;-)
check:
http://code.google.com/p/lucene-spatial-playground/
This is my sketch / soon-to-be-proposal for what I think lucene
spatial should look like. It includes a WK
You can store binary data using a binary field type -- then you need
to send the data base64 encoded.
I would strongly recommend against storing large binary files in solr
-- unless you really don't care about performance -- the file system
is a good option that springs to mind.
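The client-side encoding step described above might look like this (the class name is illustrative, not from the thread):

```java
import java.util.Base64;

public class BinaryFieldEncode {
    // base64 encode raw bytes so they can be sent to a solr binary field
    public static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) {
        System.out.println(encode("hello".getBytes()));  // aGVsbG8=
    }
}
```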
ryan
2011/4/6
Hello-
I'm looking for a way to find all the links from a set of results. Consider:
id:1
type:X
link:a
link:b
id:2
type:X
link:a
link:c
id:3
type:Y
link:a
Is there a way to search for all the links from stuff of type X -- in
this case (a,b,c)
If I'm understanding the {!join
On Fri, Jul 1, 2011 at 9:06 AM, Yonik Seeley wrote:
> On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley wrote:
>> Hello-
>>
>> I'm looking for a way to find all the links from a set of results. Consider:
>>
>>
>> id:1
>> type:X
>> lin
>
> Ah, thanks Hoss - I had meant to respond to the original email, but
> then I lost track of it.
>
> Via pseudo-fields, we actually already have the ability to retrieve
> values via FieldCache.
> fl=id:{!func}id
>
> But using CSF would probably be better here - no memory overhead for
> the FieldC
patches are always welcome!
On Tue, Jul 5, 2011 at 3:04 PM, Yonik Seeley wrote:
> On Mon, Jul 4, 2011 at 11:54 AM, Per Newgro wrote:
>> i've tried to add the params for group=true and group.field=myfield by using
>> the SolrQuery.
>> But the result is null. Do i have to configure something? In
If you optimize the index, are the results the same?
maybe it is showing counts for deleted docs (i think it does... and
this is expected)
ryan
On Sat, Aug 25, 2012 at 9:57 AM, Fuad Efendi wrote:
>
> This is bug in Solr 4.0.0-Beta Schema Browser: "Load Term Info" shows "9682
> News", but direc
Hi-
I am trying to add a setting that will boost results based on
existence in different buckets. Using edismax, I added the bq
parameter:
location:A^5 location:B^3
I want this to put everything in location A above everything in
location B. This mostly works, BUT depending on the number of mat
thanks!
On Fri, Oct 26, 2012 at 4:20 PM, Chris Hostetter
wrote:
> : How about a boost function, "bf" or "boost"?
> :
> : bf=if(exists(query(location:A)),5,if(exists(query(location:B)),3,0))
>
> Right ... assuming you only want to ignore tf/idf on these fields in this
> specifc context, function
check:
https://issues.apache.org/jira/browse/SOLR-945
this will not likely make it into 1.4
On Jul 30, 2009, at 1:41 PM, Jérôme Etévé wrote:
Hi,
Nope, I'm not using solrj (my client code is in Perl), and I'm with
solr 1.3.
J.
2009/7/30 Shalin Shekhar Mangar :
On Thu, Jul 30, 2009 at 8
On Aug 19, 2009, at 6:45 AM, johan.sjob...@findwise.se wrote:
Hi,
we're glancing at the GEO search module known from the jira issue 773
(http://issues.apache.org/jira/browse/SOLR-773).
It seems to us that the issue is still open and not yet included in
the
nightly builds.
correct
Is
On Aug 26, 2009, at 3:33 PM, djain101 wrote:
I have one quick question...
If in solrconfig.xml, if it says ...
<abortOnConfigurationError>${solr.abortOnConfigurationError:false}</abortOnConfigurationError>
does it mean defaults to false if it is
not set
as system property?
correct
On Aug 27, 2009, at 10:35 PM, Paul Tomblin wrote:
Yesterday or the day before, I asked specifically if I would need to
restart the Solr server if somebody else loaded data into the Solr
index using the EmbeddedServer, and I was told confidently that no,
the Solr server would see the new data as
can you just add a new field that has the real or average price?
Just populate that field at index time... make it indexed but not
stored
If you want the real or average price to be treated the same in
faceting, you are really going to want them in the same field.
On Aug 28, 2009, at 1:16 PM
Should be fixed in trunk. Try updating and see if it works for you
See:
https://issues.apache.org/jira/browse/SOLR-1424
On Sep 9, 2009, at 8:12 PM, Allahbaksh Asadullah wrote:
Hi ,
I am building Solr from source. During building it from source I am
getting
following error.
generate-mave
do you have anything custom going on?
The fact that the lock is in java2d seems suspicious...
On Sep 23, 2009, at 7:01 PM, pof wrote:
I had the same problem again yesterday except the process halted
after about
20mins this time.
pof wrote:
Hello, I was running a batch index the other
Hello-
I have an application that can run in the background on a user Desktop
-- it will go through phases of being used and not being used. I want
to be able to free as many system resources when not in use as possible.
Currently I have a timer that waits for 10 mins of inactivity and
r
I wonder why the common classes are in the solrj JAR?
Is the solrj JAR not just for the clients?
the solr server uses solrj for distributed search. This makes solrj
the general way to talk to solr (even from within solr)
I'm sure it is possible to configure JDK logging (java.util.logging)
programmatically... but I have never had much luck with it.
It is very easy to configure log4j programmatically, and this works
great with solr.
To use log4j rather than JDK logging, simply add slf4j-log4j12-1.5.8.jar (from
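For reference, a minimal log4j.properties along these lines -- the appender and category names are examples, not taken from this thread:

```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p %c - %m%n
# quiet solr down to warnings, for example
log4j.logger.org.apache.solr=WARN
```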
On Nov 2, 2009, at 8:29 AM, Grant Ingersoll wrote:
On Nov 2, 2009, at 12:12 AM, Licinio Fernández Maurelo wrote:
Hi folks,
as we are using a snapshot dependency on solr1.4, today we are
getting
problems when maven tries to download lucene 2.9.1 (there isn't any
2.9.1
there).
Which rep
The HTMLStripCharFilter will strip the html for the *indexed* terms,
it does not affect the *stored* field.
If you don't want html in the stored field, can you just strip it out
before passing to solr?
On Nov 11, 2009, at 8:07 PM, aseem cheema wrote:
Hey Guys,
How do I add HTML/XML docum
It looks like solr+spatial will get some attention in 1.5, check:
https://issues.apache.org/jira/browse/SOLR-1561
Depending on your needs, that may be enough. More robust/scalable
solutions will hopefully work their way into 1.5 (any help is always
appreciated!)
On Nov 13, 2009, at 11:12
Also:
https://issues.apache.org/jira/browse/SOLR-1302
On Nov 13, 2009, at 11:12 AM, Bertie Shen wrote:
Hey,
I am interested in using LocalSolr to go Local/Geo/Spatial/Distance
search. But the wiki of LocalSolr(http://wiki.apache.org/solr/LocalSolr
)
points to pretty old documentation. Is t
Solr includes slf4j-jdk14-1.5.5.jar, if you want to use the nop (or
log4j, or loopback) impl you will need to include that in your own
project.
Solr uses slf4j so that each user can decide their logging
implementation, it includes the jdk version so that something works
off-the-shelf, but
check:
http://wiki.apache.org/solr/SolrLogging
if you are using 1.4 you want to drop in the slf4j-log4j jar file and
then it should read your log4j configs
On Nov 19, 2009, at 2:15 PM, Harsch, Timothy J. (ARC-TI)[PEROT
SYSTEMS] wrote:
Hi all,
I have an J2EE application using embedded so
On Jun 24, 2008, at 12:07 AM, Norberto Meijome wrote:
hi all,
( I'm using 1.3 nightly build from 15th June 08.)
Is there some documentation about how analysers + tokenizers are
applied in
fields ? In particular, my question :
best docs are here:
http://wiki.apache.org/solr/AnalyzersToken
also, check the LukeRequestHandler
if there is a document you think *should* match, you can see what
tokens it has actually indexed...
On Jun 24, 2008, at 7:12 PM, Norberto Meijome wrote:
hi,
I'm trying to understand why a search on a field tokenized with the
nGram
tokenizer, with minGram
Hi-
I'm working on a case where we have review text that may include words
that describe what the item is *not*.
Given the text "the kitten is not clean", searching for "clean" should
not include (at least at the top) the kitten.
The approach I am considering is to copy the text to a nega
Any thoughts / ideas on how to make formatting and laying out custom
results less obtuse?
$sj('').html(item.id).appendTo(this.target);
seems ok for simple things -- like a list -- but not very designer
friendly.
ryan
On Jul 1, 2008, at 3:00 AM, Matthias Epheser wrote:
Hi community,
a
Not sure exactly what you are asking for -- I'll answer a few versions:
Do you have an existing index and want to change the field "A" to
"duck" for every document? If so, there is no way to do that off the
shelf -- check SOLR-139 for an option (but the current patch will not
work)
Do yo
The random sort field in solr 1.3 relies on the field name and dynamic
fields for ordering. Check the example solrconfig.xml in 1.3
to get random results, try various field names:
&sort=rand_123 asc
&sort=rand_xyz asc
&sort=rand_{generate your random number on the client} asc
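A sketch of building that throwaway sort parameter on the client (the class name is illustrative; the rand_* prefix must match a random-type dynamic field in your schema.xml):

```java
import java.util.Random;

public class RandomSortParam {
    // each distinct field name gives a new, stable random ordering
    public static String randomSort(Random rnd) {
        return "rand_" + rnd.nextInt(Integer.MAX_VALUE) + " asc";
    }

    public static void main(String[] args) {
        System.out.println(randomSort(new Random()));
    }
}
```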
This is
nothing to automatically create a new index, but check the multicore
stuff to see how you could implement this:
http://wiki.apache.org/solr/MultiCore
On Jul 8, 2008, at 10:25 AM, Willie Wong wrote:
Hi,
Sorry if this question sounds daft but I was wondering if there
was
anything built i
If all you are doing is stripping text from HTML, the best option is
probably to just do that on the client *before* you send it to solr.
If you need to do something more complex -- or that needs to rely on
other solr configurations you can consider using an
UpdateRequestProcessor. Likely
re-reading your post...
Shalin is correct, just use the snapshooter script to create a point-
in-time snapshot of the index. The multicore stuff will not help with
this.
ryan
On Jul 8, 2008, at 11:09 AM, Shalin Shekhar Mangar wrote:
Hi Willie,
If you want to have backups (point-in-time
question though, after snapshot has been created is there a
way to
totally clear out the contents in the master index - or have solr
recreate
the data directory?
Thanks,
Willie
Ryan McKinley <[EMAIL PROTECTED]>
08/07/2008 11:17 AM
Please respond to
solr-user@lucene.apache.org
T
Is anyone out there using nagios to monitor solr?
I remember some discussion of this in the past around exposing
response handler timing info so it could play nice with nagios... did
anyone get anywhere with this? want to share :)
Any other pointers to solr monitoring tools would be good too.
t
On Jul 15, 2008, at 2:45 AM, Sunil wrote:
Hi All,
I want to change the duplicate content behavior in solr. What I want
to
do is:
1) I don't want duplicate content.
2) I don't want to overwrite old content with new one.
Means, if I add duplicate content in solr and the content already
exis
On Jul 15, 2008, at 10:31 AM, Fuad Efendi wrote:
Thanks Ryan,
Is uniqueKey really unique if we allow duplicates? I had similar
problem...
if you allowDups, then uniqueKey may not be unique...
however, it is still used as the key for many items.
Quoting Ryan McKinley <[EMAIL PROTEC
Hi-
I'm messing with spellchecking and running into behavior that seems
peculiar. We have an index with many words including:
"swim" and "slim"
If I search for "swim", it returns "slim" as an option -- likewise, if
I search for "slim" it returns "swim"
why does it check words that are in
I have a use case where I want to spellcheck the input query across
multiple fields:
Did you mean: location = washington
vs
Did you mean: person = washington
The current parameter / response structure for the spellcheck
component does not support this kind of thing. Any thoughts on how/i
.
This is a definite workaround, but I think it might work. Hmm,
except we only have one QueryConverter
-Grant
On Jul 15, 2008, at 8:56 PM, Ryan McKinley wrote:
I have a use case where I want to spellcheck the input query across
multiple fields:
Did you mean: location = washington
vs
(assuming you are using 1.3-dev), you could use the dismax query
parser syntax for the fq param. I think it is something like:
fq=your query
I can't find the syntax now (Yonik?)
but I don't know how you could pull out the qf,pf,etc fields for the
fq portion vs the q portion.
On Jul 16,
I found that in org.apache.solr.servlet.SolrServlet.java, a
PrintWriter object is always sent as the input parameter.
SolrServlet is deprecated.
If you are going to use new features like MultiCore, make sure you
have the XmlUpdateRequestHandler registered to /update:
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler" />
committed in rev 678204
thanks Noble!
On Jul 19, 2008, at 2:40 PM, Noble Paul നോബിള്
नोब्ळ् wrote:
A patch is submitted in SOLR-536
On Sat, Jul 19, 2008 at 11:23 PM, Noble Paul നോബിള്
नोब्ळ्
<[EMAIL PROTECTED]> wrote:
meanwhile , you can manage by making the field
List<String> categories;
Currently, there are not any helper functions to pick out spellcheck
info.
But you can always use:
NamedList getResponse()
to pick out the data contained in the response:
Adding spellcheck functions to QueryResponse would be a welcome
contribution!
On Jul 21, 2008, at 12:51 PM, Jon Baer
I can't figure how to use the poll either...
here are a few others to check out:
http://lapnap.net/solr/
perhaps "a" and "f" could live together, you use 'a' if you need a
background other than white
On Jul 21, 2008, at 2:14 PM, Mike Klaas wrote:
On 20-Jul-08, at 6:19 PM, Mark Miller wrote
nor does http://selectricity.org/
On Jul 21, 2008, at 2:28 PM, Shalin Shekhar Mangar wrote:
Too bad the polls created with Google docs don't support images in
them (or
atleast i couldn't figure out how to do it)
On Mon, Jul 21, 2008 at 11:52 PM, Ryan McKinley <[EMAIL PROTEC
bug that needs fixed! Can you file a jira ticket?
On Jul 24, 2008, at 12:50 PM, kalyan chakravarti wrote:
Forgot to mention, I am using dismax queryhandler. I just tested
this with out of box latest nightly build and it throws the same
error.
http://localhost:8982/select/?q.alt=*:*&qt=di
core.getDataDir()
what kind of plugin? If you don't have access to core, you can
implement SolrCoreAware...
On Jul 25, 2008, at 2:27 PM, Mark Miller wrote:
How do I get the solr / data dir from a plugin without using
anything thats deprecated?
- Mark
In general though I'm wondering if stepping back a bit and modifying your
request handler to use a SolrDocumentList where you've already
flattened
the ExternalFileField into each SolrDocument would be an easier
approach
-- then you wouldn't need to modify the ResponseWriter at all.
Consider
Check: https://issues.apache.org/jira/browse/SOLR-646
hopefully that will solve your problems...
On Aug 1, 2008, at 4:35 PM, CameronL wrote:
The dataDir parameter specified in the <core> element in
multicore.xml
does not seem to point to the correct directory. I commented out the
element from
If there is still room for a new logo design for Solr and the
community is
open to it then I can try to come up with a proposal. Doing the logo
for
Mahout was a really interesting experience.
In my opinion, yes I'd love to see more effort put towards the
logo. I have stayed out of t
I don't know about JMX, but check the standard multi-core config...
If you are running things in multi-core mode, you can send a RELOAD
command:
http://wiki.apache.org/solr/MultiCore#head-429a06cb83e1ce7b06857fd03c38d1200c4bcfc1
On Aug 5, 2008, at 2:39 PM, Kashyap, Raghu wrote:
One of the r
Check: http://wiki.apache.org/solr/MultiCore
If you can wait a few days, there will likely be a 1.3 release
candidate out soon.
On Aug 13, 2008, at 11:30 AM, McBride, John wrote:
Hi,
I am deploying an application across 3 geographies - and as a result
will be running multiple solr instan
check a recent version, this issue should have been fixed in:
https://issues.apache.org/jira/browse/SOLR-545
On Aug 13, 2008, at 2:22 PM, Doug Steigerwald wrote:
Yeah, that's the problem. Not having the core in the URL you're
posting to shouldn't update any core, but it does.
Doug
On Aug
the dataDir is configured in solrconfig.xml
With multicore it is currently a bit wonky: you need to
configure it explicitly for each core, but the cores share the same system
variables: ${solr.data.dir}, so if you use properties, you end up
pointing to the same place.
https://issues
84606)?
On Aug 13, 2008, at 3:00 PM, Ryan McKinley wrote:
check a recent version, this issue should have been fixed in:
https://issues.apache.org/jira/browse/SOLR-545
On Aug 13, 2008, at 2:22 PM, Doug Steigerwald wrote:
Yeah, that's the problem. Not having the core in the URL you'
check now. Should be fixed in trunk
On Aug 13, 2008, at 3:05 PM, Doug Steigerwald wrote:
I checked out the trunk about 2 hours ago. Was the last commit on
the 10th supposed to fix this (r684606)?
On Aug 13, 2008, at 3:00 PM, Ryan McKinley wrote:
check a recent version, this issue
On Aug 13, 2008, at 3:29 PM, Andrew Nagy wrote:
Thanks for clarifying that Ryan - I was a bit confused too...
Before 1.3 is released, you will either be able to:
1. set the dataDir from your solr.xml config
I have been perusing the multicore code and found that the "default"
attribute
I'm looking for a way to get common word groups within documents.
That is, what are the top two, three, ... n word groups within the
index.
I was messing with indexing adjacent words together (sorry about the
earlier commit)... is this a reasonable approach? Any other ideas for
pulling
/analysis/shingle/ShingleFilter.html
). I have it set up to build 'shingles' of size 2, 3, 4, 5 which I
index into separate fields. If there is a better way of doing this
sort of thing I'd love to know :-)
Brendan
On Aug 13, 2008, at 3:59 PM, Ryan McKinley wrote:
I'm