Same in Windows: just plain text files, no metadata, no headers.
alexei martchenko
false
alexei martchenko
Even the most unstructured data has to have some breakpoint. I've seen
projects running Solr that indexed whole books, one document per
chapter, plus a boosted synopsis document. The question here is how you need to
search and match those docs.
alexei martchenko
Chrome on Windows reports the latest Heliosearch as probable malware and
asks for a "keep or discard". Norton says everything's ok with that file.
Are you guys aware of this?
alexei martchenko
Just to clarify: is the actual URL properly space-escaped?
http://localhost:8983/solr/distrib/select?q=term1%20NOT%20
term2&start=0&rows=0&qt=edismax_basic&debugQuery=true
alexei martchenko
ffer.
DIH looks like a seven-headed dragon the first time you see it, but by the end of
the day you'll love it.
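For anyone seeing it for the first time, a minimal data-config.xml is a good place to start; the driver, connection details and table/field names below are only illustrative:

<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="solr" password="secret"/>
  <document>
    <!-- one Solr document per returned row -->
    <entity name="product" query="SELECT id, name, description FROM products">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <field column="description" name="description"/>
    </entity>
  </document>
</dataConfig>

It gets wired into solrconfig.xml as a DataImportHandler request handler and is kicked off with .../dataimport?command=full-import.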
alexei martchenko
That's right, Solr doesn't import PDFs the way it imports XML. You'll need to
use Tika to import binary/specific file types.
http://tika.apache.org/1.4/formats.html
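In Solr that usually means enabling the ExtractingRequestHandler (Solr Cell), which wraps Tika. A sketch for solrconfig.xml, mapping Tika's extracted body to a hypothetical "text" field:

<requestHandler name="/update/extract" startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <!-- send the extracted document body to the index's main text field -->
    <str name="fmap.content">text</str>
    <str name="lowernames">true</str>
    <!-- prefix unknown metadata fields instead of failing on them -->
    <str name="uprefix">ignored_</str>
  </lst>
</requestHandler>

PDFs are then posted to /update/extract with a literal.id parameter, assuming the extraction contrib jars are on the classpath.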
alexei martchenko
I've been using DIH to import large databases to XML file batches, and it's
blazing fast.
alexei martchenko
so its
internal loops issue a commit after X loops and/or when it finishes
processing.
alexei martchenko
Why don't you set both solrconfig commit thresholds to very high values and issue an
explicit commit command with your sparse, small updates?
I've been doing this for ages and it works perfectly for me.
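Concretely that means something like this in solrconfig.xml (the numbers are just placeholders, high enough that Solr never commits on its own):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000000</maxDocs>
    <maxTime>86400000</maxTime> <!-- 24 hours, in milliseconds -->
  </autoCommit>
</updateHandler>

The application then commits explicitly, e.g. by posting <commit/> to /update after each small batch.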
alexei martchenko
I believe it's not possible to facet only over the page you are on; faceting is
supposed to work on the full result set. I've never tried, but I've never
seen a way this could be done.
alexei martchenko
in the middle of some
paragraph. Sometimes it works beautifully; sometimes it misleads you into
parsing URLs that were shortened with an ellipsis in the middle.
alexei martchenko
2) There are some synonym lists on the web; they aren't always complete, but
I keep analyzing fields and tokens in order to polish my synonyms. I also
like to use tools like http://www.visualthesaurus.com/ to help.
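For reference, the list ends up wired into the analysis chain roughly like this (the field type name is invented; synonyms.txt holds one entry per line, either equivalents such as "notebook, laptop" or mappings such as "colour => color"):

<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- expand synonyms at index time so queries stay cheap -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>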
Hope this helps :-)
alexei martchenko
CoreB with the same schema and hammer CoreA with
updates, commits and optimizes; then they make it available for searches while
hammering CoreB. Then swap again. This produces faster searches.
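A sketch of that setup with made-up core names: both cores are declared in solr.xml, and the swap is a single CoreAdmin call.

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- one core serves searches while the other one is being hammered -->
    <core name="coreA" instanceDir="coreA"/>
    <core name="coreB" instanceDir="coreB"/>
  </cores>
</solr>

When the rebuild finishes:
http://localhost:8983/solr/admin/cores?action=SWAP&core=coreA&other=coreB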
alexei martchenko
ome don't.
> Can I achieve that when searching for organisations even if I have a match
> on their name I will show first those which have a website.
>
> Thank you.
>
> Regards,
> Zoltan
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
If you don't need date-specific functions and/or faceting, you can store it
as an int, like 20110914, and parse it in your application,
but I don't recommend it... As a rule of thumb, dates should be stored as
dates; the millennium bug (Y2K) was all about 'saving some space',
remember?
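For comparison, the two options in schema.xml terms (field names invented; tdate/tint are the Trie-based types from the example schema, and a real date field expects values like 2011-09-14T00:00:00Z):

<!-- recommended: a real date field -->
<field name="published_at" type="tdate" indexed="true" stored="true"/>
<!-- the workaround described above: an int such as 20110914, parsed by the application -->
<field name="published_int" type="tint" indexed="true" stored="true"/>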
above returns 0 matching documents...
>
> anybody has any ideas on this? could it be because of encoding issue?
>
> -
> Smart, but doesn't work... If he worked, he'd get it done...
these files in result of any
> search operation.
>
> I am not aware of how Solr works for searching images, I mean it is content
> based or meta data based .. I am not sure.
>
> If any of you have done Image Searches with Solr , I request you to please
> help me out with this.
> Ganesh.
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
> clooney"&mm=48%25&debugQuery=off&indent=on&start=&rows=10
> If I want to factor in score/date (called creationdate)...
>
> recip(ms(NOW/HOUR,creationdate),3.16e-11,1,1).
>
> Help! and thanks so much for any examples or help..
> -Craig
get 5 ranges for 100 docs with 20 docs in each range, 6 ranges for
> 200 docs = 34 docs in each field, etc.
>
> Is it possible with solr?
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
I'm printing a big bold cheatsheet about it and stickin' it everywhere :-)
I wish I could change this thread's subject to "alexei is not working
properly" :-/
2011/8/30 Erick Erickson
> Yep, that one takes a while to figure out, then
> I wind up re-figuring it ou
pecial rule for 2 terms just add:
1<1 2<50% 6<-60%
With MORE THAN ONE clause (i.e. 2), at least 1 should match.
NOW this makes sense!
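If that rule is meant to be the default for a handler it can live in solrconfig.xml; note that "<" has to be escaped as &lt; there (the handler name below is just illustrative):

<requestHandler name="/select_dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- >1 clause: 1 must match; >2: 50% must match; >6: up to 60% may be missing -->
    <str name="mm">1&lt;1 2&lt;50% 6&lt;-60%</str>
  </lst>
</requestHandler>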
2011/8/30 Alexei Martchenko
> Anyone else strugglin' with dismax's MM parameter?
>
> We're having a problem here, seems that configs from 3 terms a
in
it.
I'd like to accomplish something like this:
2<1 3<2 4<3 8<-50%
translating: 1 or 2 terms -> 1 must match; 3 terms -> at least 2; 4 terms -> at least 3; and 5, 6, 7, 8
terms -> at least half, rounded up (5->3, 6->3, 7->4, 8->4).
It seems it's only applying the 1- and 2-clause rules.
thanks in advance
alexei
1 |
> > | 10100039 |33113319 | 1537370 | 1 |
> > | 10100040 | 331100 |1580 | 1 |
> > | 10100040 | 331694 | 1540230 | 1 |
> > | 10100040 |33113319 | 1537370 | 1 |
> > +---+-+-++
> >
> > Thanks!
> >
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
--
*Alexei Martchenko* | *CEO* | Superdownloads
--
*Alexei Martchenko* | *CEO* | Superdownloads
t;protwords.txt"/>
>
>
>
>
>
> words="stopwords.txt"/>
> generateWordParts="1" generateNumberParts="1" catenateWords="0"
> catenateNumbers="0" catenateAll=&
okenizerFactory" ignoreCase="true"
> expand="true"/>
>
> Doesn't seem to matter which tokenizer I use.This must be something
> simple that I'm not doing but am a bit stumped at the moment and would
> appreciate any tips.
> Thanks
> Gary
ere a way in solr to extend the simple year to
> 2008-01-01T00:00:00Z. Or, do i have to solve
> the problem in preprocessing, before posting?
>
> Thanks
> Oliver
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
s mostly ancient content:
>>
>> http://wiki.apache.org/solr/HierarchicalFaceting
>>
>> - Naomi
>>
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
;
>
> Thanks in advance
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
etd my code for now. But
> can
> try it once again and post the exception that I have been getting while
> crawling.
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
*stored* fields rather than what's actually in
> the inverted index.
> The TermsComponent can help here:
> http://wiki.apache.org/solr/TermsComponent
>
> Erick
>
> On Mon, Aug 22, 2011 at 11:28 AM, Alexei Martchenko
> wrote:
> > That very txt said "A Spanish s
rces/org/apache/lucene/analysis/br/stopwords.txt
2011/8/22 Alexei Martchenko
> Funny thing is that stopwords files in the examples shown in
> http://wiki.apache.org/solr/LanguageAnalysis#Spanish are actually using
> pipe and other terms. See the spanish one in
> http://svn.apache.org
> > los| the, them
> > del | de + el
> > se | himself, from him etc
> > las| the, them
> > por| for, by, etc
> > un | a
> > para | for
> > con| with
> > no | no
> > una| a
> > su | his, her
> > al | a + el
> > | es from SER
> > lo | him
> >
> >
> > Any idea? Thanks!
> >
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
Hi Koji, thanks, it's loading right now. Can't say it's really working
though, but I believe those are other issues with FastVectorHighlighter
2011/8/18 Koji Sekiguchi
> (11/08/19 4:14), Alexei Martchenko wrote:
>
>> Hi Koji thanks for the reply.
>>
>> My
Hi Koji, thanks for the reply.
My fragmentsBuilder is defined directly in solrconfig.xml. Solr 3.3 warns me
that it is a deprecated form; do you think it is in the wrong
place?
2011/8/17 Koji Sekiguchi
> Alexei,
>
> From the log, I think Solr couldn't find colored fragmentsBuilder defined
> in solrconfig.xml
--
*Alexei Martchenko* | *CEO* | Superdownloads
Good knowledge for everybody; those little mistakes like spaces, typos and
missing commas make us lose so much time. Thanks for posting this.
2011/8/18 Mike Mander
> Solution found.
> The original solr-config.xml jarowinkler definition had some line breaks.
> If I write the definition in one line
>
> synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> words="stopwords.txt"/>
>
>
>
>
>
> indexed="false" sto
generate more
> than one collation query. Is there something simple that I have overlooked?
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
to solrconfig.xml just anywhere and that
> will
> wrk?
>
> thks very much for your help
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
d to do.
Any thoughts?
2011/8/17 Alexei Martchenko
> I have the very very very same problem. I could copy+paste your message as
> mine. I've discovered so far that bigger dictionaries work better for me;
> controlling the threshold is much better than avoiding indexing one or two fields.
ons return phrases, for
> 'ne' I will get 'new york' and 'new year', but for 'new y' I will get
> nothing. Also, for 'y' I will get nothing, so the issue remains.
>
> If someone has some experience working with the Suggester, or if someone
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
ackaging%20Material,%20Supplies&qt=dismax&qf=category
>
> ^4.0&qf=keywords^2.0&qf=title^2.0&qf=smalldesc&qf=companyname&qf=usercategory&qf=usrpcatdesc&qf=city&qs=10&pf=category^4.0&pf=keywords^3&pf=title^3&pf=smalldesc^1.5&pf=comp
Hi Mike, is your config like this?
Is queryAnalyzerFieldType matching your type of field to be indexed?
Is the field correct?
textSpell
jarowinkler
sear_spellterms
false
true
org.apache.lucene.search.spell.JaroWinklerDistance
./spellchecker_jarowinkler
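(The XML tags were stripped from the snippet above; those values usually come from a spellcheck searchComponent shaped roughly like this, so treat the parameter names as a reconstruction rather than the exact original. The stray true/false values are probably flags like buildOnCommit / buildOnOptimize.)

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpell</str>
  <lst name="spellchecker">
    <str name="name">jarowinkler</str>
    <str name="field">sear_spellterms</str>
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
    <str name="spellcheckIndexDir">./spellchecker_jarowinkler</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>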
2011/8/17 Mike Mander
> Hello,
>
> i g
symlinked from my config
> folder -- I like to keep my configurations files organized so they can be
> managed by git)
>
> `start.jar` is in `usr/share/jetty/start.jar`.
>
>
> On Tuesday, 16 August, 2011 at 1:33 PM, Alexei Martchenko wrote:
>
> > AFAIK you're still se
sts no cores)
> /solr/live/admin/ does not -- 404
>
>
> On Tuesday, 16 August, 2011 at 1:13 PM, Alexei Martchenko wrote:
>
> > Let's try something simpler.
> > My start.jar is on \apache-solr-3.3.0\example\
> > Here's my local config placed in \apache-solr-3.3.0\
>
>
>
>
>
>
> Finally, looking through the logs produced by Jetty doesn't seem to reveal
> any clues about what is wrong. There doesn't seem to be any errors in there,
> except the 404s.
>
> Long story short. I'm stuck. Any suggestions on where to go with this?
>
> David
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
> > boost type B documents so that they're more likely to be
> > represented than
> > other types).
> >
> > Anyone know if there's a way to do something like this in
> > Solr?
>
> Sounds like you want to achieve diversity of results.
>
> Consider using h
ragmentsBuilder> respects
hl.tag.pre/post parameters:
--
*Alexei*
Yeah, sorry for not helping much, but while I was comparing Solr to Verity
I found several docs specific to users having trouble migrating from
Verity to Solr. They might not be API-specific, but they give some clues.
2011/8/15 Arcadius Ahouansou
> Hi Alexei.
> I had a quick look and it
omething similar.
>
> Ideally, we would like to keep most of our legacy code unchanged and have a
> kind of query-translation-layer plugged into our app if possible.
>
> -Is there lib available?
>
> -Any thought?
>
> Thanks.
>
> Arcadius.
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
t;indie music" anywhere in
> their
> > > >>
> > > >> data.
> > > >>
> > > >>> Does termfreq not support phrases?
> > > >>
> > > >> No, it is TERM frequency and indie music is not one term. I don't
BC , aBc,AbC and all the
> cases.
>
>
>
>
> Thank u in advance
>
t;>> catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
> > >>>>>>>
> > >>>>>>>> class="solr.LowerCaseFilterFactory"/>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> >>>>>>> class="solr.HTMLStripCharFilterFactory"/>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> Unfortunately that did not fix the error. There are still
> > tags
> > >>>>>>> inside the data. Although I believe there are fewer than before
> > but I
> > >>>>>>> cannot prove that. Fact is, there are still HTML tags inside the
> > >>>>>>> data.
> > >>>>>>>
> > >>>>>>> Any other ideas what the problem could be?
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> 2011/7/25 Markus Jelsma > markus.jel...@openindex.io>
> > >>>>>>> >
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> You've three analyzer elements, i wonder what that would do. You
> > need
> > >>>>>>>> to add
> > >>>>>>>> the char filter to the index-time analyzer.
> > >>>>>>>>
> > >>>>>>>> On Monday 25 July 2011 13:09:14 Merlin Morgenstern wrote:
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>> Hi there,
> > >>>>>>>>>
> > >>>>>>>>> I am trying to strip html tags from the data before adding the
> > >>>>>>>>> documents
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>> to
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>> the index. To do that I altered schem.xml like this:
> > >>>>>>>>> > >>>>>>>>>
> > >>>>>>>>> positionIncrementGap="100" autoGeneratePhraseQueries="true">
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>>
> class="solr.**WhitespaceTokenizerFactory"/>
> > >>>>>>>>>> >>>>>>>>> class="solr.**WordDelimiterFilterFactory"
> > >>>>>>>>>
> > >>>>>>>>> generateWordParts="1" generateNumberParts="1" catenateWords="1"
> > >>>>>>>>> catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>>
> class="solr.**KeywordMarkerFilterFactory"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>>
> class="solr.**WhitespaceTokenizerFactory"/>
> > >>>>>>>>>> >>>>>>>>> class="solr.**WordDelimiterFilterFactory"
> > >>>>>>>>>
> > >>>>>>>>> generateWordParts="1" generateNumberParts="1" catenateWords="0"
> > >>>>>>>>> catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>>
> class="solr.**KeywordMarkerFilterFactory"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>>
> class="solr.**HTMLStripCharFilterFactory"/>
> > >>>>>>>>>
> > >>>>>>>>> > >>>>>>>>>
> > class="solr.**WhitespaceTokenizerFactory"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> >>>>>>>>> stored="true"
> > >>>>>>>>>
> > >>>>>>>>> required="false"/>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> Unfortunately this does not work, the html tags like
> are
> > >>>>>>>>> still
> > >>>>>>>>> present after restarting and reindexing. I also tried
> > >>>>>>>>> htmlstriptransformer, but this did not work either.
> > >>>>>>>>>
> > >>>>>>>>> Has anybody an idea how to get this done? Thank you in advance
> > for
> > >>>>>>>>> any hint.
> > >>>>>>>>>
> > >>>>>>>>> Merlin
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>> --
> > >>>>>>>> Markus Jelsma - CTO - Openindex
> > >>>>>>>> http://www.linkedin.com/in/markus17
> > >>>>>>>> 050-8536620 / 06-50258350
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>
> > >>
> > >
> >
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
get a sorted
> list without searching for any terms?
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
--
*Alexei Martchenko* | *CEO* | Superdownloads
ery is get only those documents which have multiple elements for
> that multivalued field.
>
> I.e, doc 2 and 3 should be returned from the above set..
>
> Is there anyway to achieve this?
>
>
> Awaiting reply,
>
> Thanks & Regards,
> Rajani
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
> record in database.
>
> Note: My indexing logic to get the required data from DB is some what
> complex and involves many tables.
>
> Please suggest me how can I proceed here.
>
> Thanks
> Lateef
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
solr without having to restart tomcat. All you need to do is 'touch'
> the solr.xml in the solr.home directory. It can take a few seconds but solr
> will restart and reload any config.
>
> Cheers
>
> François
>
> On Jul 27, 2011, at 2:56 PM, Alexei Martchenko wrote
ere and I was able to add new document with theses 2
> fields.
>
>
>
> So far, it looks I won't need to re-index all my data. Am I right ? Do I
> need to re-index all my data or in that case I'm fine ?
>
>
>
> Thank you !
>
>
>
> Charles-André Martin
>
>
--
*Alexei Martchenko* | *CEO* | Superdownloads
e field
if (summaries == null || summaries.length == 0) {
alternateField( docSummaries, params, doc, fieldName );
}
}
This seems to work for my purposes. If nobody has any issues with this code
perhaps it should be a patch?
Thanks,
Alexei
something
value1 and value3 will be skipped completely. When a field is not
multivalued everything works as advertised.
Any suggestions?
Regards,
Alexei
different things, nothing seems to work.
my config:
true
1000
abstract
regex
true
104400
0
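(The parameter names were stripped out of the archive above, so the mapping below is only a guess at what those values belong to; hl.fragsize=0 is the usual way to ask for the whole field value instead of fragments.)

<lst name="defaults">
  <str name="hl">true</str>
  <str name="hl.snippets">1000</str> <!-- high enough to cover every value of a multivalued field -->
  <str name="hl.fl">abstract</str>
  <str name="hl.fragmenter">regex</str>
  <str name="hl.mergeContiguous">true</str>
  <str name="hl.maxAnalyzedChars">104400</str>
  <str name="hl.fragsize">0</str> <!-- 0 = return the entire field, not fragments -->
</lst>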
Regards,
Alexei
Thank you for your replies guys.
Personally I like the flatten="true" option.
However there is still one issue with it. After stripping the tags away,
I get "JoeSmith" instead of "Joe Smith" with a space (solr 3.1).
Did anyone run into this issue?
Cheers,
Alexei
-
approach or is there another filter available which will
do what I want? Perhaps something that will strip everything but integers
from the data.
Thank You,
Alexei
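(For reference, one filter that can do exactly that is solr.PatternReplaceFilterFactory; a sketch of a field type that throws away everything except digits, with a made-up type name:)

<fieldType name="digits_only" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- delete every non-digit character, including spaces and newlines -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="[^0-9]" replacement="" replace="all"/>
  </analyzer>
</fieldType>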
Sorry about bringing an old thread back, I thought my solution could be
useful.
I also had to deal with multiple data sources. If the data source number
could be queried for in one of your parent entities then you could get it
using a variable as follows:
http://lucene.472066.n3.nabble.com/DIH-I
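(The example itself is cut off in the archive; a sketch of the idea in DIH terms, where every name - entities, columns, datasources - is hypothetical:)

<document>
  <!-- parent entity looks up which datasource each article lives in -->
  <entity name="articleLocation" dataSource="registry"
          query="SELECT article_id, ds_name FROM article_location">
    <!-- child entity resolves its dataSource attribute from the parent's row -->
    <entity name="article" dataSource="${articleLocation.ds_name}"
            query="SELECT id, title, body FROM article WHERE id='${articleLocation.article_id}'"/>
  </entity>
</document>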
Hi Everyone,
I am having an identical problem, concatenating an author's first and last
names stored in an XML blob.
Because this field is multivalued, copyField does not work.
Does anyone have a solution?
Regards,
Alexei
Sorry about this post. I'll RTFM more carefully next time.
Resolved:
Regards,
Alexei
direction?
Regards,
Alexei
Hi Gora,
Unfortunately reorganizing the data is not an option for me.
Multiple databases exist and a third party is taking care of
populating them. Once a database reaches a certain size, a switch
occurs and a new database is created with the same table structure.
Gora Mohanty-3 wrote:
>
> I m
Hi Gora,
Thank you for your reply.
The datasource number is stored in the database.
The parent entity queries for this number and in theory it
should becomes available to the child entity - "Article" in my case.
I am initiating the import via solr/db/dataimport?command=full-import
Script is a
Hi,
I am in a situation where the data needed for one of the fields in my
document
may be sitting in a different datasource each time.
I would like to be able to configure something like this:
http://lucene.472066.n3.nabble.com/Resolve-a-DataImportHandler-datasource-based-on-previous-entity-tp22