Chris Hostetter-3 wrote:
>
> sorting on a multivalued field is defined to have unspecified behavior. It
> might fail with an error, or it might fail silently.
>
I learned this the hard way, it failed silently for a long time until it
failed with an error:
http://lucene.472066.n3.nabble.com/Diffe
Hi,
I ran into this problem again the other night. I've looked through my log
files in more detail, and nothing seems out of place (I stripped user
queries out and included it below). I have the following setup:
1. Indexer has 2 cores. One core gets incremental updates, the other is for
full re-sync.
Hi Hoss,
I realize I'm reviving a really old thread, but I have the same need, and
SpanNumericRangeQuery sounds like a good solution for me. Can you give me
some guidance on how to implement that?
Thanks,
Wojtek
I've been asked to suggest a framework for managing a website's content and
making all that content searchable. I'm comfortable using Solr for search,
but I don't know where to start with the content management system. Is
anyone using a CMS (open source or commercial) that you've integrated with
Solr?
Thanks for the responses. I'll give Drupal a shot. It sounds like it'll do
the trick, and if it doesn't then at least I'll know what I'm looking for.
Wojtek
Hi Asif,
Did you end up implementing this as a custom sort order for facets? I'm
facing a similar problem, but not related to time. Given 2 terms:
A: appears twice in half the search results
B: appears once in every search result
I think term A is more "interesting". Using facets sorted by frequency
I'm trying to figure out if Solr is the right solution for a problem I'm
facing. I have 2 data entities: P(arent) & C(hild). P contains up to 100
instances of C. I need to expose an interface that searches attributes of
entity C, but displays them grouped by parent entity, P. I need to include
facet counts.
Funtick wrote:
>
>>then 2) get all P's by ID, including facet counts, etc.
>>The problem I face with this solution is that I can have many matching P's
> (10,000+), so my second query will have many (10,000+) constraints.
>
> SOLR can automatically provide you P's with Counts, and it will be
>
I'm trying to create data backups using the ReplicationHandler's built in
functionality. I've configured my master as
http://wiki.apache.org/solr/SolrReplication documented :
    <str name="backupAfter">optimize</str>
but I don't see any backups created on the master. Do I need the snapshooter
script?
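(For reference, the master-side configuration described on that wiki page
looks roughly like this in solrconfig.xml; the exact block was stripped by
the archive, so treat this as a sketch:)

    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="master">
        <str name="replicateAfter">optimize</str>
        <str name="backupAfter">optimize</str>
      </lst>
    </requestHandler>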
Hi,
I'm writing a function query to score documents based on Levenshtein
distance from a string. I want my function calls to look like:
lev(myFieldName, 'my string to match')
I'm running into trouble parsing the string I want to match ('my string to
match' above). It looks like all the built in
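Based on the replies below, the shape of the solution is a custom
ValueSourceParser registered in solrconfig.xml via
<valueSourceParser name="lev" class="..."/>. A minimal sketch, assuming a
custom LevenshteinValueSource class (not a built-in) that does the
per-document distance calculation:

    public class LevenshteinValueSourceParser extends ValueSourceParser {
      @Override
      public ValueSource parse(FunctionQParser fp) throws ParseException {
        ValueSource field = fp.parseValueSource(); // myFieldName
        String target = fp.parseArg();             // 'my string to match'
        return new LevenshteinValueSource(field, target);
      }
    }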
I'm using trunk from July 8, 2009. Do you know if it's more recent than that?
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> which version of Solr are you using? the "backupAfter" name was
> introduced recently
>
It looks like parseArg was added on Aug 20, 2009. I'm working with slightly
older code. Thanks!
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> did you implement your own ValueSourceParser . the
> FunctionQParser#parseArg() method supports strings
>
> On Wed, Sep 9, 2009 at 12:10
Do you mean that it's been renamed, so this should work?
    <str name="snapshot">optimize</str>
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> before that backupAfter was called "snapshot"
>
I've verified that renaming backupAfter to snapshot works (I should've checked
before asking). Thanks Noble!
wojtekpia wrote:
>
> <str name="snapshot">optimize</str>
>
Hi,
I'm trying to import data from a list of files using the
FileListEntityProcessor. Here is my import configuration:
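(The config block was stripped by the archive. From the details later in the
thread it had roughly this shape; baseDir and fileName match what is quoted
below, the rest is a sketch:)

    <dataConfig>
      <dataSource type="FileDataSource"/>
      <document>
        <entity name="files" processor="FileListEntityProcessor"
                baseDir="d:\my\directory\" fileName=".*WRK" rootEntity="false">
          <entity name="lines" processor="LineEntityProcessor"
                  url="${files.fileAbsolutePath}"/>
        </entity>
      </document>
    </dataConfig>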
If I have only one file in d:\my\directory\ then everything works correctly.
If I have multiple files then I get the following exception:
Sep
Fergus McMenemie-2 wrote:
>
>
> Can you provide more detail on what you are trying to do? ...
> You seem to be listing all files "d:\my\directory\.*WRK". Do
> these WRK files contain lists of files to be indexed?
>
>
That is my complete data config file. I have a directory containing a bunch
Note that if I change my import file to explicitly list all my files (instead
of using the FileListEntityProcessor) as below then everything works as I
expect.
...
I want to build a FunctionQuery that scores documents based on a multi-valued
field. My intention was to use the field cache, but that doesn't get me
multiple values per document. I saw other posts suggesting UnInvertedField
as the solution. I don't see a method in the UnInvertedField class that w
Hi,
I'm running Solr version 1.3.0.2009.07.08.08.05.45 in 2 environments. I have
a field defined as:
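(The field definition was stripped by the archive; presumably something like
the following, a guess apart from the field name:)

    <field name="myDate" type="date" indexed="true" stored="true" multiValued="true"/>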
The two environments have different data, but both have single and
multi-valued entries for myDate.
On one environment sorting by myDate works (sort seems to be by the 'last'
value if multi-valued).
Hi,
I'm trying to change the masterUrl of a search slave at runtime. So far I've
found 2 ways of doing it:
1. Change solrconfig_slave.xml on master, and have it replicate to
solrconfig.xml on the slave
2. Change solrconfig.xml on slave, then issue a core reload command. (a side
note: can I issue
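A third option may be to override the master per request; if I read the 1.4
ReplicationHandler docs correctly (an unverified assumption on my part), the
fetchindex command accepts a masterUrl parameter:

    http://slave:8983/solr/replication?command=fetchindex&masterUrl=http://new-master:8983/solr/replication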
I'm seeing the same behavior and I don't have any custom query parsing
plugins. Similar to the original post, my queries like:
select?q=field:[1 TO *]
select?q=field:[1 TO 2]
select?q=field:[1 TO 2]&debugQuery=true
work correctly, but including an unbounded range appears to break the debug
component.
> What's the field
> type?
>
> -Yonik
> http://www.lucidimagination.com
>
> On Fri, Oct 16, 2009 at 3:01 PM, wojtekpia wrote:
>>
>> I'm seeing the same behavior and I don't have any custom query parsing
>> plugins. Similar to the original post, my queries like:
I ran into trouble running several cores (either as Solr multi-core or as
separate web apps) in a single JVM because the Java garbage collector would
freeze all cores during a collection. This may not be an issue if you're not
dealing with large amounts of memory. My solution is to run each web app in
its own JVM.
I was thinking of going this route too because I've found that parsing XML
result sets using XmlDocument + XPath can be very slow (up to a few seconds)
when requesting ~100 documents. Are you getting good performance parsing
large result sets? Are you using SAX instead of DOM?
Thanks,
Wojtek
Could this be solved with a multi-valued custom field type (including a
custom comparator)? The OP's situation deals with multi-valuing products for
each customer. If products contain strictly numeric fields then it seems
like a custom field implementation (or extension of BinaryField?) *should*
be possible.
I'm trying to load ~10 million records into Solr using the DataImportHandler.
I'm running out of memory (java.lang.OutOfMemoryError: Java heap space) as
soon as I try loading more than about 5 million records.
Here's my configuration:
I'm connecting to a SQL Server database using the sqljdbc driver.
>> ...should apply here too -- try disabling
>> autoCommit and turn-off autowarming and see if it helps.
>>
>> On Wed, Jun 25, 2008 at 5:53 AM, wojtekpia <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> I'm trying to load ~10 million records into Solr using the
>
It looks like that was the problem. With responseBuffering=adaptive, I'm able
to load all my data using the sqljdbc driver.
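For the archives, the fix is just a connection property on the JDBC URL in
the DIH dataSource definition; roughly (host, database and credentials are
placeholders):

    <dataSource type="JdbcDataSource"
                driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
                url="jdbc:sqlserver://localhost;databaseName=mydb;responseBuffering=adaptive"
                user="solr" password="secret"/>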
If I know that condition C will eliminate more results than either A or B,
does specifying the query as: "C AND A AND B" make it any faster (than the
original "A AND B AND C")?
I have a numeric field that I'm using for getting more records like the
current one. Does the MoreLikeThisHandler do numeric comparisons on numeric
fields (e.g. 4 is "similar" to 5), or is it a string comparison?
I stored 2 copies of a single field: one as a number, the other as a string.
The MLT handler returned the same documents regardless of which of the 2
fields I used for similarity. So to answer my own question, the
MoreLikeThisHandler does not do numeric comparisons on numeric fields.
I didn't realize that subsets were used to evaluate similarity. From your
example, I assume that the strings: 456 and 123456 are "similar". If I store
them as integers instead of strings, will Solr/Lucene still use subsets to
assign similarity?
Thanks.
Can I search for fields using the luke handler? I'd like to be able to say
something like:
solr/admin/luke?fl=a*
where the '*' is a wildcard not necessarily related to dynamic fields. I
will have at least a few hundred dynamic fields, so I'd rather not load all
fields into memory in th
I have two questions:
1. I am pulling data from 2 data sources using the DIH. I am using the
deltaQuery functionality. Since the data sources pull data sequentially, I
find that some data is getting unnecessarily re-indexed from my second data
source. Hopefully this helps illustrate my problem:
A
Does setting termVectors to true affect faceting speed on a field? I changed
a field definition to add termVectors="true" (the before/after schema lines
were stripped by the archive; see the sketch at the end of this message).
And I see a significant performance improvement (~6x faster). MyFacetField
has ~25,000 unique values. Does it make sense that this change caused the
improvement? I made several other cha
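(The stripped schema lines amounted to a change of this shape; the field name
is from the message, the other attributes are guesses:)

    Before: <field name="MyFacetField" type="string" indexed="true" stored="true"/>
    After:  <field name="MyFacetField" type="string" indexed="true" stored="true"
                   termVectors="true"/>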
I'd like to have a handler that 1) executes a query, 2) provides spelling
suggestions for incorrectly spelled words, and 3) if the original query
returns 0 results, return results based on the spell check suggestions.
1 & 2 are straightforward using the SpellCheckComponent, but I can't figure
out how to do 3.
I have 2 fields which will sometimes contain the same data. When they do
contain the same data, am I paying the same performance cost as when they
contain unique data? I think the real question here is: does Lucene index
values per field, or per document?
When using the MoreLikeThisHandler with facets turned on, the facets show
counts of things that are more like my original document. When I use the
MoreLikeThisComponent, the facets show counts of things that match my
original document (I'm querying by document ID), so there is only one
result, and
I have a custom row transformer that I'm using with the DataImportHandler.
When I try to create a dynamic field from my transformer, it doesn't get
created.
If I do exactly the same thing from my dataimport handler config file, it
works as expected.
Has anyone experienced this? I'm using a nightly build.
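The transformer does essentially this (a simplified sketch; the field name is
made up, computeValue() is a placeholder, and "*_s" stands for whatever
dynamicField pattern it should match in schema.xml):

    public class MyTransformer extends Transformer {
      @Override
      public Object transformRow(Map<String, Object> row, Context context) {
        // value for a dynamic field: this is what silently fails to get created
        row.put("myValue_s", computeValue(row));
        return row;
      }
    }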
> ...add the field name (if you know it beforehand)
> to
> your data config and use the Transformer to set the value. If you don't
> know
> the field name before hand then this will not work for you.
>
> On Sat, Aug 30, 2008 at 1:31 AM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
>>
>
Thanks Hoss. I created SOLR-760:
https://issues.apache.org/jira/browse/SOLR-760
hossman wrote:
>
>
> : When using the MoreLikeThisHandler with facets turned on, the facets
> show
> : counts of things that are more like my original document. When I use the
> : MoreLikeThisComponent, the facets
I would like to use (abuse?) the dataimporter.last_index_time variable in my
full-import query, but it looks like that variable is only set when running
a delta-import.
My use case:
I'd like to use a stored procedure to manage how data is given to the
DataImportHandler so I can gracefully handle
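Concretely, I'd like my full-import entity to look something like this (the
procedure name is made up):

    <entity name="item"
            query="exec sp_items_since '${dataimporter.last_index_time}'"/>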
I created a JIRA issue for this and attached a patch:
https://issues.apache.org/jira/browse/SOLR-768
wojtekpia wrote:
>
> I would like to use (abuse?) the dataimporter.last_index_time variable in
> my full-import query, but it looks like that variable is only set when
> running a
Make sure the fields you're trying to highlight are stored in your schema
(e.g. <field name="description" type="text" indexed="true" stored="true"/>)
David Snelling-2 wrote:
>
> Ok, I'm very frustrated. I've tried every configuration I can and
> parameters
> and I cannot get fragments to show up in the highlighting in solr. (no
> fragments at the bottom or hig
> [quoted schema fragment, garbled in the archive: field definitions with
> stored="true" and compressed="true"]
>
> On Tue, Sep 23, 2008 at 1:59 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
>>
>> Make sure the fields you're trying to highlight are stored in your schema
> Not sure why it's not working. We use
> this live and do very complex queries including facets that work fine.
>
> www.donorschoose.org
>
>
>
> On Tue, Sep 23, 2008 at 2:20 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
>>
>> Try a query where you're sure
> ...instead of string?
>
>
> Thank you very much for the help by the way.
>
>
> On Tue, Sep 23, 2008 at 2:49 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
>>
>> Your fields are all of string type. String fields aren't tokenized or
>> analyzed, so you have to
I've been running load tests over the past week or 2, and I can't figure out
my system's bottleneck that prevents me from increasing throughput. First
I'll describe my Solr setup, then what I've tried to optimize the system.
I have 10 million records and 59 fields (all are indexed, 37 are stored
> ...If so, it isn't set large enough to handle the faceting
> you're doing.
>
> Erik
>
>
> On Nov 4, 2008, at 8:01 PM, wojtekpia wrote:
>
>>
>> I've been running load tests over the past week or 2, and I can't
>> figure out
>>
Where is the alt directory in the source tree (or what is the JIRA issue
number)? I'd like to apply this patch and re-run my tests.
Does changing the lockType in solrconfig.xml address this issue? (My
lockType is the default - single).
markrmiller wrote:
>
> The latest alt directory patch uses
> -----Original Message-----
> From: wojtekpia [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 05, 2008 8:15 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Throughput Optimization
>
>
> Yes, I am seeing evictions. I've tried setting my filterCache higher,
> but
> then I
I'll try changing my other caches to LRUCache and observe performance.
Interestingly, the FastLRUCache has given me a ~10% increase in performance,
much lower than I've read on the SOLR-667 thread.
Would compressing some of my stored fields significantly improve
performance? Most of my stored fie
I'd like to integrate this improvement into my deployment. Is it just a
matter of getting the latest Lucene jars (Lucene nightly build)?
Yonik Seeley wrote:
>
> You're probably hitting some contention with the locking around the
> reading of index files... this has been recently improved in Lucene
Is there a configurable way to switch to the previous implementation? I'd
like to see exactly how it affects performance in my case.
Yonik Seeley wrote:
>
> And if you want to verify that the new faceting code has indeed kicked
> in, some statistics are logged, like:
>
> Nov 24, 2008 11:14:32
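For anyone finding this thread in the archives: in released 1.4 the switch is
the facet.method request parameter, enum for the original term-enumeration
code and fc for the new field-cache code (I'm not certain it existed yet in
the Nov 2008 trunk builds discussed here). For example:

    select?q=*:*&facet=true&facet.field=MyFacetField&facet.method=enum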
Definitely, but it'll take me a few days. I'll also report findings on
SOLR-465. (I've been on holiday for a few weeks)
Noble Paul നോബിള് नोब्ळ् wrote:
>
> wojtek, you can report back the numbers if possible
>
> It would be nice to know how the new impl performs in real-world
It looks like file locking was the bottleneck - CPU usage is up to ~98% (from
the previous peak of ~50%). I'm running the trunk code from Dec 2 with the
faceting improvement (SOLR-475) turned off. Thanks for all the help!
Yonik Seeley wrote:
>
> FYI, SOLR-465 has been committed. Let us know if
I described my deployment scenario in an earlier post:
http://www.nabble.com/Throughput-Optimization-td20335132.html
Does it sound like the new faceting algorithm could be the culprit?
wojtekpia wrote:
>
> Definitely, but it'll take me a few days. I'll also report findings on SOLR-465.
New faceting stuff is off because I'm encountering some problems when I turn
it on; I posted the details:
http://www.nabble.com/new-faceting-algorithm-td20674902.html#a20840622
Yonik Seeley wrote:
>
> On Thu, Dec 4, 2008 at 1:54 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
Yonik Seeley wrote:
>
>
> Are you doing commits at any time?
> One possibility is the caching mechanism (weak-ref on the
> IndexReader)... that's going to be changing soon hopefully.
>
> -Yonik
>
No commits during this test. Should I start looking into my heap size
distribution and garbage collection?
I've updated my deployment to use NIOFSDirectory. Now I'd like to confirm
some previous results with the original FSDirectory. Can I turn it off with
a parameter? I tried:
java
-Dorg.apache.lucene.FSDirectory.class=org.apache.lucene.store.FSDirectory
...
but that didn't work.
I've seen some strange results in the last few days of testing, but this one
flies in the face of everything I've read on this forum: Reducing
filterCache size has increased performance.
I have posted my setup here:
http://www.nabble.com/Throughput-Optimization-td20335132.html.
My original fil
Reducing the amount of memory given to java slowed down Solr at first, then
quickly caused the garbage collector to behave badly (same issue as I
referenced above).
I am using the concurrent cache for all my caches.
It looks like my filterCache was too big. I reduced my filterCache size from
700,000 to 20,000 (without changing the heap size) and all my performance
issues went away. I experimented with various GC settings, but none of them
made a significant difference.
I see a 16% increase in throughput by a
I'm running load tests against my Solr instance. I find that it typically
takes ~10 minutes for my Solr setup to "warm-up" while I throw my test
queries at it. Also, I have the same two warm-up queries specified for the
firstSearcher and newSearcher event listeners.
I'm now benchmarking the effect of these warm-up queries.
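For reference, the two listeners look like this in my solrconfig.xml (the
query strings are placeholders for my real warm-up queries):

    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">warm-up query 1</str></lst>
        <lst><str name="q">warm-up query 2</str></lst>
      </arr>
    </listener>
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">warm-up query 1</str></lst>
        <lst><str name="q">warm-up query 2</str></lst>
      </arr>
    </listener>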
Sorry, I forgot to include that. All my autowarmCount settings are set to 0.
Feak, Todd wrote:
>
> First suspect would be Filter Cache settings and Query Cache settings.
>
> If they are auto-warming at all, then there is a definite difference
> between the first start behavior and the post-commit behavior.
I use my warm up queries to fill the field cache (or at least that's the
idea). My filterCache hit rate is ~99% & queryResultCache is ~65%.
I update my index several times a day with no 'optimize', and performance is
seamless. I also update my index once nightly with an 'optimize', and that's
wh
I'm optimizing because I thought I should. I'll be updating my index
somewhere between every 15 minutes, and every 2 hours. That means between 12
and 96 updates per day. That seems like a lot of index files (and it scared
me a little), so that's my second reason for wanting to optimize nightly.
I
I have set up cron jobs that update my index every 15 minutes. I have a
distributed setup, so the steps are:
1. Update index on indexer machine (and possibly optimize)
2. Invoke snapshooter on indexer
3. Invoke snappuller on searcher
4. Invoke snapinstaller on searcher.
These updates are small, d
I have a transient SQL table that I use to load data into Solr using the
DataImportHandler. I run an update every 15 minutes
(dataimport?command=full-import&clean=false&optimize=false), but my table
will frequently have no new data for me to import. When the table contains
no data, it looks like Solr still goes through a commit.
Thanks Shalin, a short circuit would definitely solve it. Should I open a
JIRA issue?
Shalin Shekhar Mangar wrote:
>
> I guess Data Import Handler still calls commit even if there were no
> documents created. We can add a short circuit in the code to make sure
> that
> does not happen.
>
I'm intermittently experiencing severe performance drops due to Java garbage
collection. I'm allocating a lot of RAM to my Java process (27GB of the 32GB
physically available). Under heavy load, the performance drops approximately
every 10 minutes, and the drop lasts for 30-40 seconds. This coincides with
full garbage collections.
Created SOLR-974: https://issues.apache.org/jira/browse/SOLR-974
I'm using a recent version of Sun's JVM (6 update 7) and am using the
concurrent generational collector. I've tried several other collectors, none
seemed to help the situation.
I've tried reducing my heap allocation. The search performance got worse as
I reduced the heap. I didn't monitor the garbage collector.
(Thanks for the responses)
My filterCache hit rate is ~60% (so I'll try making it bigger), and I am CPU
bound.
How do I measure the size of my per-request garbage? Is it (total heap size
before collection - total heap size after collection) / # of requests to
cause a collection?
I'll try your
I'm experiencing similar issues. Mine seem to be related to old generation
garbage collection. Can you monitor your garbage collection activity? (I'm
using JConsole to monitor it:
http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html).
In my system, garbage collection usually doesn'
I'm not sure if you suggested it, but I'd like to try the IBM JVM. Aside from
setting my JRE paths, is there anything else I need to do to run inside the IBM
JVM? (e.g. re-compiling?)
Walter Underwood wrote:
>
> What JVM and garbage collector setting? We are using the IBM JVM with
> their concurre
The type of garbage collector definitely affects performance, but there are
other settings as well. There's a related thread currently discussing this:
http://www.nabble.com/Performance-%22dead-zone%22-due-to-garbage-collection-td21588427.html
hbi dev wrote:
>
> Hi wojtekpia,
I profiled our application, and GC is definitely the problem. The IBM JVM
didn't change much. I'm currently looking into ways of reducing my memory
footprint.
Has anyone tried Solr on the Sun Java Real-Time JVM
(http://java.sun.com/javase/technologies/realtime/index.jsp)? I've read that
it includes better control over the garbage collector.
Thanks.
Wojtek
I noticed your wiki post about sorting with a function query instead of the
Lucene sort mechanism. Did you see a significantly reduced memory footprint
by doing this? Did you reduce the number of fields you allowed users to sort
by?
Lance Norskog-2 wrote:
>
> Sorting creates a large array with
Is there an easy way to choose/create an alternate sorting algorithm? I'm
frequently dealing with large result sets (a few million results) and I
might be able to benefit from domain knowledge in my sort.
During full garbage collection, Solr doesn't acknowledge incoming requests.
Any requests that were received during the GC are timestamped the moment GC
finishes (at least that's what my logs show). Is there a limit to how many
requests can queue up during a full GC? This doesn't seem like a Solr
s
That's not quite what I meant. I'm not looking for a custom comparator, I'm
looking for a custom sorting algorithm. Is there a way to use quick sort or
merge sort or... rather than the current algorithm? Also, what is the
current algorithm?
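For context, my understanding (from skimming the Lucene code, so treat it as
a sketch rather than gospel) is that sorted search isn't a full sort at all:
hits stream through a bounded priority queue of size k = start + rows, so the
cost is about O(n log k) for n matches, not O(n log n). Conceptually:

    // Illustrative only, not the actual Lucene classes.
    // The comparator orders worse hits first, so peek() is the current worst.
    PriorityQueue<ScoreDoc> queue = new PriorityQueue<ScoreDoc>(k, comparator);
    for (ScoreDoc hit : matches) {
      if (queue.size() < k) {
        queue.add(hit);                                  // queue not full yet
      } else if (comparator.compare(hit, queue.peek()) > 0) {
        queue.poll();                                    // evict current worst
        queue.add(hit);                                  // keep the better hit
      }
    }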
Otis Gospodnetic wrote:
>
>
> You can use one of the
Ok, so maybe a better question is: should I bother trying to change the
"sorting" algorithm? I'm concerned that with large data sets, sorting
becomes a severe bottleneck (this is an assumption, I haven't profiled
anything to verify). Does it become a severe bottleneck? Do you know if
alternate sor
Luckily, I'm still hitting my
performance requirements, so I'm able to accept that.
Thanks for the tips!
Wojtek
yonik wrote:
>
> On Tue, Feb 3, 2009 at 11:58 AM, wojtekpia wrote:
>> I noticed your wiki post about sorting with a function query instead of
>> the Lucene sort mechanism.
>
I tried sorting using a function query instead of the Lucene sort and found
no change in performance. I wonder if Lance's results are related to
something specific to his deployment?
In my schema I have two copies of my numeric fields: one with the original
value (used for display, sort), and one with a rounded version of the
original value (used for range queries).
When I use my rounded field for numeric range queries (e.g.
q=RoundedValue:[100 TO 1000]), I see very consistent response times.
Has there been a recent change (since Dec 2/08) in the paging algorithm? I'm
seeing much worse performance (75% drop in throughput) when I request 20
records starting at record 180 (page 10 in my application).
Thanks.
Wojtek
I'll run a profiler on new and old code and let you know what I find.
I have changed my schema between tests: I used to have termVectors turned on
for several fields, and now they are always off. My underlying data has not
changed.
Yes, I commit roughly every 15 minutes (via a data update). This update is
consistent between my tests, and only causes a performance drop when I'm
sorting on fields with many unique values. I've examined my GC logs, and
they are also consistent between my tests.
Otis Gospodnetic wrote:
>
> Hi
This was a false alarm, sorry. I misinterpreted some results.
wojtekpia wrote:
>
> Has there been a recent change (since Dec 2/08) in the paging algorithm?
> I'm seeing much worse performance (75% drop in throughput) when I request
> 20 records starting at record 180 (page 10 in my application).
I'm using the DataImportHandler to load data. I created a custom row
transformer, and inside of it I'm reading a configuration file. I am using
the system's solr.solr.home property to figure out which directory the file
should be in. That works for a single-core deployment, but not for
multi-core deployments.
Thanks Shalin. I think you missed the call to .getResourceLoader(), so it
should be:
context.getSolrCore().getResourceLoader().getInstanceDir()
Works great, thanks!
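For anyone else hitting this, the usage inside the transformer ends up as
(a sketch; the properties file name is mine):

    public Object transformRow(Map<String, Object> row, Context context) {
      String instanceDir =
          context.getSolrCore().getResourceLoader().getInstanceDir();
      // resolve config relative to the owning core, not solr.solr.home
      File config = new File(instanceDir, "conf/transformer.properties");
      // ... load the file and transform the row ...
      return row;
    }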
Shalin Shekhar Mangar wrote:
>
>
> You can use Context.getSolrCore().getInstanceDir()
>
>
Is there a recommended unix flavor for deploying Solr on? I've benchmarked my
deployment on Red Hat. Our operations team asked if we can use FreeBSD
instead. Assuming that my benchmark numbers are consistent on FreeBSD, is
there anything else I should watch out for?
Thanks.
Wojtek
Thanks Otis. Do you know what the most common deployment OS is? I couldn't
find much on the mailing list or http://wiki.apache.org/solr/PublicServers
Otis Gospodnetic wrote:
>
>
> You should be fine on either Linux or FreeBSD (or any other UNIX flavour).
> Running on Solaris would probably gi
I'm running Solr on Tomcat 6.0.18 with Java 6 update 7 on Windows 2003 64
bit. Over the past month or so, my JVM has crashed twice with the error
below. Has anyone experienced this? My system is not heavily loaded, and the
crash seems to coincide with an update (via DIH). I'm running trunk code
fr
I have an index of product names. I'd like to sort results so that entries
starting with the user query come first.
E.g.
q=kitchen
Results would sort something like:
1. kitchen appliance
2. kitchenaid dishwasher
3. fridge for kitchen
It looks like using the query() function query comes close, but
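The closest I've gotten is boosting prefix matches rather than truly sorting
on them (a sketch, not a verified solution: name_prefix is an assumed field,
e.g. analyzed with EdgeNGramFilterFactory, and the boost is arbitrary):

    select?q=kitchen&defType=dismax&qf=name&bq=name_prefix:kitchen^100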
Hi,
I'm importing data using the DIH. I manage all my data updates outside of
Solr, so I use the full-import command to update my index (with
clean=false). Everything works fine, except that I can't delete documents
easily using the DIH. I noticed the preImportDeleteQuery attribute, but
doesn't se
I'm using full-import, not delta-import. I tried it with delta-import, and it
would work, except that I'm querying for a large number of documents so I
can't afford the cost of deltaImportQuery for each document.
It sounds like $deleteDocById will work. I just need to update from 1.3 to
trunk. Thanks!
I updated to Java 6 update 13 and have been running problem free for just
over a month. I'll continue this thread if I run into any problems that seem
to be related.
Yonik Seeley-2 wrote:
>
> I assume that you're not using any Tomcat native libs? If you are,
> try removing them... if not (and