I'm running a pretty old version of Solr (5.x). Also, it looks like the
only new files created recently are .liv files, which were created at the
time of deletion, and also a segment_ file.
I'd love some guidance on this.
Thanks,
- A
--
Alex Hanna, PhD
alex-hanna.com
@alexhanna
weight' field is defined in the schema as follows:
Where "float" is defined as:
What does the error mean? How can I handle it?
Thanks.
Alex Broitman | Integration Developer
4 HaHarash Street | PO Box 7330 | Hod Hasharon, ISRAEL 45241
F: +972-9-7944333 | C: +972-54-47
Hello
Please add me as a Solr wiki editor - my nickname is Alex Birdman.
--
Alex Birdman
Head of PR department of 3dmdb.com
3dmodeldatab...@gmail.com
nd or AWS Route 53
> Am 25.02.2019 um 08:46 schrieb Jörn Franke :
>
> Elastic ip addresses?
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-address
> es-eip.html
>
>> Am 25.02.2019 um 08:22 schrieb Addison, Alex (LNG-LON)
>> :
>>
>>
using Solr Cloud 7.7.
Thanks,
Alex Addison
https://www.youtube.com/watch?v=pNe1wWeaHOU&list=PLYI8318YYdkCsZ7dsYV01n6TZhXA6Wf9i&index=1
http://audiobible.life CHECK IT OUT!
On Wed, Sep 6, 2017 at 5:57 PM, Nick Way wrote:
> Hi, I have a custom fiel
t
the hostname of the current Solr instance. I can of course write the
JavaScript to handle this task, but maybe there is a builtin Velocity
property I can ask for the host & port of the current server?
Thank you
--
With best wishes, Alex Ott
http://alexott.net/
Twitter: alexott_en
Oh, shoot, forgot to include my wiki username. It's "AlexYumas" - sorry about
that, stupid me.
On Sat, Oct 31, 2015 at 10:48 PM, Alex wrote:
> Hi,
>
> Please kindly add me to the Solr wiki contributors list. The app we're
> developing (Jitbit Help) is using Apache Solr
Hi,
Please kindly add me to the Solr wiki contributors list. The app we're
developing (Jitbit Help) is using Apache Solr to power our knowledge-base
search engine, customers love it. (we were using MS Fulltext indexing
service before, but it's a huge PITA).
Thanks
My stopwords don't work as expected.
Here is part of my schema:
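(The schema excerpt was stripped by the archive. A minimal sketch of a typical stopword setup, assuming StopFilterFactory with a stopwords.txt file:)

    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

A common gotcha is declaring the stop filter for only one of the index/query analyzer chains; it usually has to appear in both.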
I am trying to write a custom analyzer whose execution is determined by
the value of another field within the document.
For example, if the locale field in the document has 'de' as the value, then
the analyzer would use the German set of tokenizers/filters to process the
value of a field.
My que
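(Since an analyzer normally sees only the value of its own field, a common workaround is one field type per locale, with the indexing client choosing the field. A sketch assuming German analysis components, purely illustrative:)

    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.GermanNormalizationFilterFactory"/>
        <filter class="solr.SnowballPorterFilterFactory" language="German"/>
      </analyzer>
    </fieldType>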
We need an advanced stopwords filter in Solr.
We need stopwords to be stored in a DB and the ability for users to change them
(each user should have their own stopwords). That's why I am thinking about
sending stopwords to Solr from our app, or connecting to our DB from Solr and
using the updated stopwords in a custom Sto
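(One possible sketch, assuming the managed stop filter available in the Solr 4.x line; the application could then push per-user lists from the DB over the schema REST API rather than having Solr read the DB directly:)

    <fieldType name="text_managed" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.ManagedStopFilterFactory" managed="english"/>
      </analyzer>
    </fieldType>

The list named "english" is then editable at /schema/analysis/stopwords/english.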
type SQL Server
Geometry
wta.WTArea <- does show in the index, is of type varchar(max)
they are defined similarly in schema.xml
Again, thanks for all your help, I appreciate it
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct
.java:47)
at
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:118)
So now I think I have to update the parameters in my fieldType definition, right?
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cel
apeReadWriterFormat.readShapeOrNull(LegacyShapeReadWriterFormat.java:153)
at
org.apache.solr.schema.AbstractSpatialFieldType.parseShape(AbstractSpatialFieldType.java:167)
my field type is defined as follows
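(The definition itself is not shown in the archive. A sketch of a typical Solr 4.x RPT field type with JTS enabled, which may differ from the actual one:)

    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
               spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
               geo="true" distErrPct="0.025" maxDistErr="0.001" units="degrees"/>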
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Ge
. If not then your
current problem is squarely with the import process/config, not with
Solr spatial.
~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley
On Sat, Aug 23, 2014 at 10:53 AM, Bostic, Alex wrote:
> Ok thanks, I am eve
solr-4.9.0\ocsirasspatial\solr-webapp\webapp\WEB-INF\lib
Any other hints are certainly welcome. I think I'm close
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
-Original Me
ng my path issues would be great
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
-Original Message-
From: Bostic, Alex [mailto:alex.bos...@urs.com]
Sent: Saturday, August 23, 2014 3:53 A
ass.path=c:\AddedSoftware\solr-4.9.0\jts-1.13\lib -jar start.jar
Based on the above, what am I missing to get this to work? Maybe I am
overlooking an issue in the console?
Thanks
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct lin
Ok Great, I'm just going to dive in and see if I can index my data. Does
spatial reference matter?
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
-Original Message-
From:
an provide more detail if needed.
Thanks
Alex Bostic
GIS Developer
URS Corporation
12420 Milestone Center Drive, Suite 150
Germantown, MD 20876
direct line: 301-820-3287
cell line: 301-213-2639
Hi all,
I have a field that contains dates (it has date type) and I would like
to make a hierarchical (pivot) facet based on that field.
So I would like to have something like this:
date_of_creation:
|__ 2014
|   |__ January
|   |   |_ 01
|   |   |_ 02
|   |   |_ 14
|
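(One sketch of a way to get such a pivot: index the year/month/day parts into separate fields at index time - the field names below are illustrative - and query with facet.pivot=created_year,created_month,created_day:)

    <field name="created_year"  type="string" indexed="true" stored="false"/>
    <field name="created_month" type="string" indexed="true" stored="false"/>
    <field name="created_day"   type="string" indexed="true" stored="false"/>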
Hi,
All of our synonyms are maintained in a DB; we would like to fetch those
synonyms dynamically for query expansion (not at indexing time). Are there any
code contributions?
I saw some discussion years ago, but without a conclusion.
Thanks a lot!
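(One hedged sketch, if nothing reads the DB directly: a managed synonym filter on the query analyzer, with an external job pushing the DB contents to Solr's schema REST API. The resource name is illustrative:)

    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.ManagedSynonymFilterFactory" managed="english"/>
    </analyzer>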
Thanks, Koji. No, we don't have that option set. Should we?
ld be happening? Thanks.
-Alex
ver
works fine for me.
Alex.
Hi,
I'm running Solr 4.3 embedded in Tomcat, so there's a Solr server starting when
Tomcat starts.
In the same webapp, I also have a process to recreate the Lucene index when
Solr starts. To do this, I have a singleton instance of EmbeddedSolrServer
provided by Spring. This same instance is als
Thanks, Jack. Sorry, took me a while to reply :)
It sounds like sentence/paragraph level searches won't be easy.
Warm regards,
Alex
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 15 April 2013 5:09 PM
To: solr-user@lucene.apache.org
Subject: Re: Tok
Hi. Is it possible to search within paragraphs or sentences in Solr? The
PatternTokenizerFactory uses regular expressions, but how can this be done with
plain ASCII docs that don't have tags (HTML), yet they're broken into
paragraphs? Thanks.
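(A sketch of one way to do it with PatternTokenizerFactory, assuming paragraphs are separated by blank lines; sentences would need a different pattern:)

    <fieldType name="paragraphs" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <!-- group="-1" splits on the pattern, so each paragraph becomes one token -->
        <tokenizer class="solr.PatternTokenizerFactory" pattern="\n\s*\n" group="-1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>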
Warm regards,
Alex
Thanks, Oussama. That was very useful information and we have added the double
quotes. One interesting trick: we had to change the way we did it to wrap the
pattern value in single quotes so we could have double quotes inside.
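(An illustrative example of that quoting trick - not the actual pattern used - where single quotes around the attribute value let the regex contain double quotes:)

    <tokenizer class="solr.PatternTokenizerFactory" pattern='"([^"]*)"' group='1'/>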
Warm regards,
Alex Cougarman
Bahá'í World Centre
Haifa, I
ersal Peace"
However, it finds and highlights the word Promulgation but not the word Peace
Here's the field's definition in our schema.xml:
Warm regards,
Alex Cou
Thanks, Erick. That really helped us in learning about tokens and how the
Analyzer works. Thank you!
Warm regards,
Alex
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 19 September 2012 3:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Wildcard
unavailable? Could it serve searches?
What happens if it has to replicate a huge amount of data?
Regards,
Alex
ate_text:"2010-01-*"
However, when we run these, they return nothing. What are we doing wrong?
date_text:"*-01-27"
date_text:"2010-*-27"
date_text:"2010-01-27*"
Warm regards,
Alex
efficient way to do this?
Thanks.
Warm regards,
Alex
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 13 September 2012 4:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Partial date searches
Wildcard patterns work on dates if they are "string"
where the day isn't known
but the month and year are known. Is this possible? Is there a sample search we
can run in the admin interface against our index? Thanks.
Warm regards,
Alex Cougarman
Bahá'í World Centre
Haifa, Israel
Office: +972-4-835-8683
Cell: +972-54-241-4742
acoug...@bwc.org
dexed? Thanks.
Sincerely,
Alex
there some special
tool for this purpose?)
Regards,
Alex
autoGeneratePhraseQueries is set so that the tokens generated in the query
analyzer behave more like tokens from a space delimited query. So
"ns1.define.logica.com" finds a similar set of documents to "ns1 define logica
com" (i.e. "ns1 AND define AND logica AND com"), rather than "ns1 OR define OR
logica OR com".
Many thanks, Alex
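(For reference, the flag is set on the fieldType element; a sketch with an illustrative analysis chain - the value shown is not a recommendation for the case above:)

    <fieldType name="text_host" class="solr.TextField"
               positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>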
This is contrary to e.g. "ns.logica.define.com" which is treated as a single
token. Is there a way I can make Solr treat both queries the same way?
Many thanks, Alex
--
Alex Willmer | Developer
2 Trinity Park, Birmingham, B37 7ES | United Kingdom
M: +44 7557 752744
al.will...@logica.c
Thanks guys! I'll try out OpenNMS & Zabbix :)
Alex
On 03/14/2012 12:07 AM, Jan Høydahl wrote:
And here is a page on how to wire Solr's JMX info into OpenNMS monitoring tool.
Have not tried it, but as soon as a collector config is defined once I'd guess
it could be re-used,
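(For reference, publishing Solr's MBeans usually only takes the jmx element in solrconfig.xml plus the standard JMX remote options on the JVM; a minimal sketch:)

    <!-- solrconfig.xml: register Solr MBeans with the JVM's MBeanServer
         (start the JVM with -Dcom.sun.management.jmxremote or similar) -->
    <jmx />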
Hi there,
Yes, I know about that tool, however, we've decided that that's not
optimal for us, so I'm looking for something freely available.
Alex
On 03/13/2012 09:15 AM, Rafał Kuć wrote:
Hello Alex!
Right now, SPM from Sematext is free to use so You can try that out :)
Does anyone know of one ? Or has a set of JMX URLs that could be used to
make e.g. munin or cacti use that data?
I'm currently running psi-probe on each host to have at least some
overview of what's going on within the JVM.
Thanks!
Alex
ted Israel a while ago to integrate this into the package, but he
hasn't answered yet.
Cheers,
Alex
his into the package, but he
hasn't answered yet.
Cheers,
Alex
On 09/03/2011 08:49 PM, Erick Erickson wrote:
Does hl.fragsize do what you want?
Best
Erick
On Sat, Sep 3, 2011 at 11:56 AM, alex wrote:
hi all,
I would like to truncate some titles (or limit length) while still using
highlighting if possible, like:
very long title...end of very long
hi all,
I would like to truncate some titles (or limit length) while still using
highlighting if possible, like:
very long title...end of very long title, or
very long title sfgdsdfsg end of very...
Can this currently be done with any highlighter?
thanks.
The queries I am trying to do are
q=title:Unicamp
and
q=title:Unicamp&bf=question_count^5.0
The boosting factor (5.0) is just to verify if it was really used.
Thanks
Alex
On Wed, Jun 8, 2011 at 10:25 AM, Denis Kuzmenok wrote:
> Show your full request to solr (all params)
>
>
one field and boost in another one)?
Thanks in advance
Alex Grilo
Hi,
Can I make a query that returns only exact match or do I have to change the
fields to achieve that?
Thanks in advance
Alex Grilo
The code is here: http://pastebin.com/50ugqRfA
<http://pastebin.com/50ugqRfA>and my schema.xml configuration entry for
similarity is:
Thanks
Alex
On Mon, May 16, 2011 at 2:01 PM, Gora Mohanty wrote:
> On Mon, May 16, 2011 at 10:04 PM, Alex Grilo wrote:
> > Hi,
> > I&
it.
Thanks in advance
--
Alex Bredariol Grilo
Developer - umamao.com
her way to look at this is PageRank relies on the number and anchor
text of the incoming link, we're trying to use the number of people and
their keywords/comments as a weight for the link.
Alex
On Fri, Mar 4, 2011 at 6:29 PM, Gora Mohanty wrote:
> On Fri, Mar 4, 2011 at 10:24 AM, A
lanner higher.
I am thinking of implementing org.apache.solr.search.ValueSourceParser, which
takes a guid and runs an "embedded query" to get a score for this guid in the
bookmark schema. This would probably require two separate indexes to begin
with.
Keen to hear ideas on what's the best way to implement this and where I
should start.
Thanks,
Alex
Ahmet Arslan wrote:
I can see changes if I change fragsize, but no
hl.snippets.
Maybe your text is too short to generate more than one snippet?
What happens when you increase hl.maxAnalyzedChars parameter?
&hl.maxAnalyzedChars=2147483647
It's working now. I guess, it was a problem
Ahmet Arslan wrote:
--- On Mon, 2/7/11, alex wrote:
From: alex
Subject: hl.snippets in solr 3.1
To: solr-user@lucene.apache.org
Date: Monday, February 7, 2011, 7:38 PM
hi all,
I'm trying to get result like :
blabla keyword blabla ...
blablakeyword blabla...
so, I'd like
hi all,
I'm trying to get a result like:
blabla keyword blabla ... blablakeyword blabla...
so, I'd like to show 2 fragments. I've added these settings:
20
3
but I get only 1 fragment blabla keyword blabla.
Am I trying to do it the right way? Is it something that can be done via changes in
conf
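(For reference, these parameters can go on the request URL or as handler defaults in solrconfig.xml; a sketch assuming the values mentioned above:)

    <requestHandler name="/select" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="hl">true</str>
        <str name="hl.fragsize">20</str>
        <str name="hl.snippets">3</str>
      </lst>
    </requestHandler>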
I just moved to a multi core solr instance a few weeks ago, and it's
been working great. I'm trying to add a 3rd core and I can't query
against it though.
I'm running 1.4.1 (and tried 1.4.0) with the spatial search plugin.
This is the section in solr.xml
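(The snippet itself is not shown. A sketch of what a three-core section in a legacy solr.xml of that era typically looks like - core names are illustrative:)

    <cores adminPath="/admin/cores">
      <core name="core0" instanceDir="core0"/>
      <core name="core1" instanceDir="core1"/>
      <core name="core2" instanceDir="core2"/>
    </cores>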
I've removed the index dir and c
I'm going to go ahead and reply to myself since I solved my problem.
It seems I was doing one more update to the data at the end and wasn't
doing a commit, so it then couldn't write to the other core. Adding the
last commit seems to have fixed everything.
On 2/1/2011 11:08 A
I recently added a second core to my solr setup, and I'm now running
into this "Lock obtain timed out" error when I try to update one core
after I've updated another core.
In my update process, I add/update 1000 documents at a time and commit
in between. Then at the end, I commit and optimize
de,stored_latitude),2),pow(sub(input_longitude,stored_longitude),2))distance filter
What's anyone else out there using?
Thanks in advance,
Alex
Make sure you are not going to "reinvent the wheel" here ;). A lot has been
done around the problem of distributed search engines.
This thread might be useful for you: http://search-hadoop.com/m/ARlbS1MiTNY
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene -
jar
lucene-snowball-2.9.3.jar
lucene-spellchecker-2.9.3.jar
...
Hope that helps someone in a similar situation.
Thanks again for the help!
Cheers,
Alex Matviychuk
On Wed, Oct 27, 2010 at 18:01, Alex Matviychuk wrote:
> On Wed, Oct 27, 2010 at 03:57, Chris Hostetter
> wrote:
>> This almo
FieldType class and it looks like it only relies on
solr stuff and lucene.
I don't have much experience with classloader issues, any tips on how
to debug this?
And Ken:
I tried renaming the field as you suggested, but I get the same issue.
Thanks,
Alex Matviychuk
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:137)
solr schema:
...
...
Any ideas?
Thanks,
Alex Matviychuk
Hi,
Adding Solr user list.
We used a similar approach to the one in this patch, but with Hadoop Streaming.
Did you determine that indices are really missing? I mean did you find
missing documents in the output indices?
Alex Baranau
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
ke the feature is in demand in the community.
Thank you,
Alex.
On Sun, Aug 22, 2010 at 5:55 PM, MitchK wrote:
>
> Alex,
>
> it sounds like it would make sense.
> Use cases could be i.e. clustering or similar techniques.
> However, in my opinion the point of view for such a modific
s for
them (which do not influence the searching process, like
formatting/highlighting, fields to return, etc.). Thus, we could execute
*one* search query and fetch different data for different purposes.
Does this all make sense to you guys?
Thank you,
Alex Baranau
Sematext :: http://sem
t it makes my questions more clear.
Thanks,
Alex
On 2010-07-12 10:26, Chantal Ackermann wrote:
Hi Alex,
I think you have to explain the complete use case. Paging is done by
specifying the parameter "start" (and "rows" if you want to have more or
less than 10 hits per page). F
Hi,
So if those are separate documents, how should I handle paging? Two
separate queries?
The first to return all matching course-event pairs, and a second one to get
courses for a given page?
Is this common design described in detail somewhere?
Thanks,
Alex
On 2010-07-09 01:50, Lance Norskog
turns docs 1 & 3.
How would I remove London/Glasgow from doc 1 and Birmingham from doc 3?
Or should I create a separate doc for each name-event?
Thanks,
Alex
You are absolutely right. The fields have trailing spaces in them. Thanks Erick
for your time. Really appreciated!
Thanks,
Alex
On May 12, 2010, at 8:29 PM, Erick Erickson wrote:
> Click the "schema browser" link on the admin page.
> On the next page click
> the "fields&
Found the problem! It's because all the values in the productType field have
trailing spaces in them, like this: "ProductBean ". Thanks Hoss for your
suggestion of using Luke query which exposed the problem.
You guys are awesome!
Thanks,
Alex
On May 12, 2010, at 10:12 PM,
Sorry please discard my query results here, because I was playing with the
field type and changed it to "text" from "string" and forgot to change it back.
I will change it back to "string" and post the query results shortly.
I apologize for the careless mistake.
Thanks Hoss. Please see the query results as follows:
>
> : productType:ProductBean
>
> ...can you please disable the QueryElevationComponent and see if that
> changes things?
>
> : productType:ProductBean
>
> What are the numFound values for these queries? (bot
n in the schema:
Thanks,
Alex
On May 12, 2010, at 11:58 AM, Erick Erickson wrote:
Not til this evening, don't have a handy SOLR implementation to ping...
But another option is to get a copy of Luke and look at the index, but the
same caution
about seeing terms not stored data holds.
Or you
Sorry Erick, can you tell me how to find the raw *indexed* terms from the admin
console? I am not familiar with the admin console.
Thanks,
On May 12, 2010, at 10:18 AM, Erick Erickson wrote:
> Hmmm, nothing looks odd about that, except perhaps the casing. If you use
> the admin
> console to loo
[Debug output for the query productType:ProductBean (LuceneQParser): every score
component in the explain section is 0.0.]
Alex Wang
CrossView Inc.
Mobile: 647-4093066
Email: aw...@crossview.com<mailto:aw...@crossview.com>
http://www.crossview.com
[Another excerpt of the same debug output; every score value shown is 0.0.]
rches, I get results as expected
(productType:ProductBe*).
I have tried various things like clearing my browser cache, deleting the data
folder and re-indexing. None of them helped. Can someone please shed some light
here?
Thanks,
Alex
Thanks so much. That works really well now. So this brings up a
complaint I have with the Solr documentation. I see very few actual
examples. If I had seen an example of a multi-word search, I assume it
would have had these parentheses.
-Alex
On 3/18/2010 5:54 PM
_title:rude)=1) 8.844975 = idf(docFreq=7,
maxDocs=20423) 0.5 = fieldNorm(field=artist_title, doc=19218) 0.2631579
= coord(5/19)
Someone else suggested I use DisMax, but I can't really get that to do
what I want right now either. I'm just wondering why this seems to not
be using this field at all.
-Alex
"q":"title:akon artist:akon description:akon tags:akon +type:video artist_title:akon featuring_artist:akon
collaborators:akon","qf":"title^100 artist^150 description^5 tags^10 artist_title^500 featuring_artist^20
collaborators^50","json.nl":"map","qt":"dismax","wt":"json","version":"1.2","rows":"30"}},"response":{"numFound":0,"start":0,"docs":[]}}
If I remove the qt=dismax, I get results like I should. Can anyone shed
some light?
Thanks,
Alex
Aha. That appears to be the issue. I hadn't realized that the query
handler had all of those definitions there.
-Alex
On 3/16/2010 6:56 PM, Erick Erickson wrote:
I suspect your problem is that you still have "price" defined in
solrconfig.xml for the dismax handle
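(For context, those dismax defaults live in solrconfig.xml; a sketch of such a handler block with illustrative values - a leftover entry referencing a removed field like "price" would sit in this same list:)

    <requestHandler name="dismax" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="defType">dismax</str>
        <str name="qf">title^100 artist^150 description^5 tags^10</str>
        <!-- e.g. a stale boost function on a dropped field:
        <str name="bf">ord(price)</str> -->
      </lst>
    </requestHandler>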
needs to be cleared? Query type=standard works fine here.
Thanks,
Alex
need to do for that?
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
tokenizer & filter list
Is there a way to have a string field that's case-insensitive?
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
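(The usual recipe - a sketch, with an illustrative type name - is a TextField that keeps the whole value as a single token and lower-cases it at both index and query time:)

    <fieldType name="string_ci" class="solr.TextField" sortMissingLast="true" omitNorms="true">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>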
On 3/10/2010 12:11 PM, Alex Thurlow wrote:
Hi all,
I've searched the archives and web, but
ve completely deleted my index between changes and
reinserted all my data.
What am I missing here?
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
That's great information. Thanks!
-Alex
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
On 3/2/2010 3:11 PM, Ahmet Arslan wrote:
I'm new to Solr and just getting it set up
and testing it out. I'd like to know i
t within the
description. Is this possible either in the indexing or with a query
option?
Thanks,
Alex
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
ing.
>
> On Jan 1, 2010, at 2:11 PM, Alex Muir wrote:
>
>> Hi,
>>
>> I'm about to start using Ant to get Carrot2 working with solr however
>> I was first trying to get it working without Ant by placing jars into
>> a lib directory in the quickstart example di
est how to accomplish this I would be happy to hear about it.
Happy New Year!
Regards
--
Alex
https://sites.google.com/a/utg.edu.gm/alex
Thanks Otis for the reply. Yes this will be pretty memory intensive.
The size of the index is 5 cores with a maximum of 500K documents in each
core. I did search the archives before but did not find any definite
answer. Thanks again!
Alex
On Nov 27, 2009, at 11:09 PM, Otis Gospodnetic wrote
there a limit on the number of fields allowed per document?
2. What is the performance impact for such design?
3. Has anyone done this before and is it a wise thing to do?
Thanks,
Alex
hossman wrote:
>
> If you just want the full input string passed to the analyzer of each qf
> field, then you just need to quote the entire string (or escape every
whitespace character in the string with a backslash) so that the entire
> input is considered one chunk -- but then you don't ge
how to search against two
fields with dismax when they are tokenized differently (and one of them is
not tokenized on whitespace)
Could you please help with that situation?
Thank you in advance,
Alex.
arameter.
>
It's not good for me unfortunately, but thanks for the suggestion.
Alex Baranov.
On Sat, Oct 10, 2009 at 3:01 PM, Yonik Seeley wrote:
> On Sat, Oct 10, 2009 at 6:34 AM, Alex Baranov
> wrote:
> >
> > Hello,
> >
> > It seems to me that there is no w
Hello,
It seems to me that there is no way I can use the dismax handler for
searching in both tokenized and untokenized fields while I'm searching for a
phrase.
Consider the following example. I have two fields in the index: product_name and
product_name_un. The schema looks like:
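(The schema lines were stripped by the archive; a sketch of what such a pair typically looks like - field names from the message, types assumed:)

    <field name="product_name"    type="text"   indexed="true" stored="true"/> <!-- tokenized -->
    <field name="product_name_un" type="string" indexed="true" stored="true"/> <!-- untokenized -->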
Please, take a look at
http://issues.apache.org/jira/browse/SOLR-1379
Alex.
On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu wrote:
> Just wondering, is there an easy way to load the whole index into ram?
>
> On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov >wrote:
>
>