Yonik Seeley wrote
> "rollback" is a lucene-level operation that isn't really supported at
> the solr level:
> https://issues.apache.org/jira/browse/SOLR-4733
I find it odd that this unsupported operation has been around since Solr
1.4. In this case, it seems like there is some underlying issue sp...
We've noticed that partial updates are not rolling back with subsequent
commits based on the same document id. Our only success in mitigating this
issue has been to issue an empty commit immediately following the rollback.
I've included an example below showing the partial update's unexpected
result...
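A minimal SolrJ sketch of the workaround described above, i.e. following the
rollback with an empty commit (the example included in the original post is
cut off in this preview; the core URL, field name, and the newer
HttpSolrClient.Builder API are illustrative assumptions, the thread itself
targets Solr 4.x):

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RollbackWorkaround {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            // Send an atomic ("partial") update that we later decide to discard.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("title_t", Collections.singletonMap("set", "updated title"));
            solr.add(doc);

            // Abandon the uncommitted change.
            solr.rollback();

            // Workaround from the post: issue an empty commit immediately after
            // the rollback, before any further updates against the same id.
            solr.commit();
        }
    }
}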
Stefan Matheis-3 wrote
> To me, it sounds more like you shouldn’t have to care about such gory
> details as a user - at all.
>
> would you mind opening an issue on JIRA, Todd? Including all the details
> you already provided, as well as a link to this thread, would be best.
>
> Depending on what...
It looks like the issue has to do with the Date object. When the document is
fully updated (with the date specified) the field is created with a String
object so everything is indexed as it appears. When the document is
partially updated (with the date omitted) the field is re-created using the
pre...
We recently started using atomic updates in our application and have since
noticed that date fields copied to a text field have varying results between
full and partial updates. When the document is fully updated, the copied text
date appears as expected (i.e. yyyy-MM-dd'T'HH:mm:ss.SSSZ); however, w...
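A hedged SolrJ sketch of the two indexing paths being compared in this
thread; the field names (a pub_date field copied into a text field) and core
URL are made up for illustration:

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicDateUpdate {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            // Full update: the date is supplied as a string, so the copyField
            // target indexes "2015-06-01T12:00:00.000Z" exactly as sent.
            SolrInputDocument full = new SolrInputDocument();
            full.addField("id", "doc-1");
            full.addField("pub_date", "2015-06-01T12:00:00.000Z");
            full.addField("title_t", "original title");
            solr.add(full);
            solr.commit();

            // Atomic update: pub_date is omitted, so (per the diagnosis above)
            // Solr rebuilds it from the stored Date object and the copied text
            // no longer matches the original string form.
            SolrInputDocument partial = new SolrInputDocument();
            partial.addField("id", "doc-1");
            partial.addField("title_t", Collections.singletonMap("set", "new title"));
            solr.add(partial);
            solr.commit();
        }
    }
}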
James,
I apologize for the late response.
Dyer, James-2 wrote
> With the DIH request, are you specifying "cacheDeletePriorData=false"
We are not specifying that property (it looks like it defaults to "false").
I'm actually seeing this issue when running a full clean/import.
It appears that the...
Mikhail Khludnev wrote
> It's worth mentioning that for a really complex relation scheme it might be
> challenging to organize all of them into parallel ordered streams.
This will most likely be the issue for us, which is why I would like to have
the Berkley cache solution to fall back on, if possible...
Mikhail Khludnev wrote
> "External merge" join helps to avoid boilerplate caching in such simple
> cases.
Thank you for the reply. I can certainly look into this, though I would have
to apply the patch for our version (i.e. 4.8.1). I really just simplified
our data configuration here, which actually...
We currently index using DIH along with the SortedMapBackedCache cache
implementation, which has worked well until recently, when we needed to index
a much larger table. We were running into memory issues using the
SortedMapBackedCache, so we tried switching to the BerkleyBackedCache but
appear to hav...
Erick Erickson wrote
> Have you considered using SolrJ instead of DIH? I've seen
> situations where that can make a difference for things like
> caching small tables at the start of a run, see:
>
> searchhub.org/2012/02/14/indexing-with-solrj/
Nice write-up. I think we're going to move to that ev...
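A rough sketch of the SolrJ approach suggested above (not code from the
linked article): cache the small table in memory once, stream the large
table over JDBC, join in application code, and send documents in batches.
The table, column, and field names and the connection details are
hypothetical:

import java.sql.*;
import java.util.*;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrJIndexer {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
             SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {

            // Cache the small lookup table up front (what SortedMapBackedCache did in DIH).
            Map<String, String> categories = new HashMap<>();
            try (Statement st = db.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, name FROM category")) {
                while (rs.next()) {
                    categories.put(rs.getString("id"), rs.getString("name"));
                }
            }

            // Stream the large table and join against the in-memory cache.
            List<SolrInputDocument> batch = new ArrayList<>();
            try (Statement st = db.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, title, category_id FROM item")) {
                while (rs.next()) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", rs.getString("id"));
                    doc.addField("title_t", rs.getString("title"));
                    doc.addField("category_s", categories.get(rs.getString("category_id")));
                    batch.add(doc);
                    if (batch.size() == 1000) {
                        solr.add(batch);
                        batch.clear();
                    }
                }
            }
            if (!batch.isEmpty()) {
                solr.add(batch);
            }
            solr.commit();
        }
    }
}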
Dyer, James-2 wrote
> The DIH Cache feature does not work with delta import. Actually, much of
> DIH does not work with delta import. The workaround you describe is
> similar to the approach described here:
> https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport ,
> which in my op...
It appears that DIH entity caching (e.g. SortedMapBackedCache) does not work
with deltas... is this simply a bug with the DIH cache support or somehow by
design?
Any ideas on a workaround for this? Ideally, I could just omit the
"cacheImpl" attribute but that leaves the query (using the default pr
Todd Long wrote
> I'm curious as to where the loss of precision would be when using
> "-(Double.MAX_VALUE)" as you mentioned? Also, any specific reason why you
> chose that over Double.MIN_VALUE (sorry, just making sure I'm not missing
> something)?
So, to answ...
Chris Hostetter-3 wrote
> ...i mention this as being a workaround for floats/doubles because the
> functions are evaluated as doubles (no "casting" or "forced integer
> context" type support at the moment), so with integer/float fields there
> would be some loss of precision.
Excellent, thank...
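A hedged SolrJ sketch of the function-based sort workaround being discussed:
def() substitutes a sentinel (here -(Double.MAX_VALUE), as in the question
quoted above) when the field has no value, so missing documents no longer
compare equal to 0. The field name freq_d is an assumption:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class MissingValueSort {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            SolrQuery q = new SolrQuery("*:*");
            // def() falls back to the sentinel when freq_d has no value, so
            // documents missing freq_d sort below every real value, including 0.
            q.setSort("def(freq_d,-1.7976931348623157E308)", SolrQuery.ORDER.asc);
            System.out.println(solr.query(q).getResults().getNumFound());
        }
    }
}

As the quoted reply notes, the function is evaluated as a double, so this is
precision-safe for double fields but can lose precision for integer/float
fields.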
I'm trying to sort on numeric (e.g. TrieDoubleField) fields and running into
an issue where 0 and NULL values are being compared as equal. This appears
to be the "common case" in the FieldComparator class where the missing value
(i.e. NULL) gets assigned for a 0 value (which is perfectly valid). Is...
Sounds good. Thank you for the synonym (definitely will work on this) and
padding suggestions.
- Todd
Erick Erickson wrote
> But I _really_ have to go back to one of my original questions: What's
> the use-case?
The use-case is with autocompleting fields. The user might know a frequency
starts with 2, so we want to limit those results (e.g. 2, 23, 214, etc.). We
would still index/store the numeric-...
I see what you're saying and that should do the trick. I could index 123 with
an index synonym 123.0. Then my regex query "/123/" should hit along with a
boolean query "123.0 OR 123.00*". Is there a cleaner approach to breaking
apart the boolean query in this case? Right now, outside of Solr, I'm j...
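A small SolrJ sketch of the two query forms mentioned above, assuming a
hypothetical text field freq_t that holds both the original token and its
".0" synonym:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class DecimalQueries {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            // Regex query: matches the whole-number token directly.
            SolrQuery regex = new SolrQuery("freq_t:/123/");
            // Boolean query: matches the ".0" synonym exactly, or any longer
            // zero-padded form via the trailing wildcard.
            SolrQuery bool = new SolrQuery("freq_t:123.0 OR freq_t:123.00*");

            System.out.println(solr.query(regex).getResults().getNumFound());
            System.out.println(solr.query(bool).getResults().getNumFound());
        }
    }
}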
Erick Erickson wrote
> No, not using SynonymFilterFactory. Rather take that as a base for a
> custom Filter that
> doesn't use any input file.
OK, I just wanted to make sure I wasn't missing something that could be done
with the SynonymFilterFactory itself. At one time, I started going down this
p...
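A rough sketch of the custom-filter idea from the quoted suggestion: a Lucene
TokenFilter with no synonym file that, for a whole-number token such as
"123", also emits "123.0" at the same position (position increment 0 makes it
a synonym). The class name is made up, and the factory wiring (a
TokenFilterFactory subclass for schema.xml) is omitted:

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.AttributeSource;

public final class DecimalSynonymFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);
    private AttributeSource.State pendingState; // saved state of the original token
    private String pendingSynonym;              // the ".0" form still to be emitted

    public DecimalSynonymFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        // Emit the queued synonym at the same position as the original token.
        if (pendingSynonym != null) {
            restoreState(pendingState);
            termAtt.setEmpty().append(pendingSynonym);
            posIncAtt.setPositionIncrement(0);
            pendingSynonym = null;
            return true;
        }
        if (!input.incrementToken()) {
            return false;
        }
        String term = termAtt.toString();
        if (term.matches("\\d+")) {        // whole number -> queue "<term>.0"
            pendingSynonym = term + ".0";
            pendingState = captureState();
        }
        return true;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        pendingState = null;
        pendingSynonym = null;
    }
}

Applied at index time, the original token is still emitted, so a regex query
like /123/ keeps matching while the padded synonym covers the "dot zero"
forms discussed above.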
Essentially, we have a grid of data (i.e. frequencies, baud rates, data
rates, etc.) and we allow wildcard filtering on the various columns. As the
user provides input in a specific column, we simply filter the overall data
by an implicit "starts with" query (i.e. 23 becomes 23*). In most cases,
y...
I'm having some normalization issues when trying to search decimal fields
(i.e. TrieDoubleField copied to TextField).
1. Wildcard searching: I created a separate "TextField" field type (e.g.
filter_decimal) which filters whole numbers to have at least one decimal
place (i.e. dot zero) using the pa...
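A hedged guess, in plain Java, at the kind of whole-number normalization the
field type above performs (the actual pattern is cut off in this preview):
append ".0" to tokens that contain no decimal point, so "123" and "123.0"
normalize alike:

import java.util.regex.Pattern;

public class WholeNumberNormalizer {
    private static final Pattern WHOLE_NUMBER = Pattern.compile("^(\\d+)$");

    // Returns "123" as "123.0"; tokens that already have a decimal point
    // (or aren't numeric) pass through unchanged.
    public static String normalize(String token) {
        return WHOLE_NUMBER.matcher(token).replaceAll("$1.0");
    }

    public static void main(String[] args) {
        System.out.println(normalize("123"));    // -> 123.0
        System.out.println(normalize("123.45")); // -> 123.45 (unchanged)
    }
}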