Hi,
on Solr 4.7 I've run into a strange issue. While setting up a field I've
noticed in the analysis form that when I use a char filter factory (for example
HTMLSCF) with a tokeniser (ST) the analysis chain grinds to a halt. The
char filter does not seem to pass anything into the tokeniser.
Field typ
B*ll*cks, before posting I spent an hour searching for issues, honest.
Soon as I post within seconds I find
https://issues.apache.org/jira/browse/SOLR-5800
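For anyone hitting the same thing, the sort of chain that triggers it is sketched below (field and filter choices illustrative); if I've read the ticket right it's only the admin analysis screen that misbehaves, not the actual indexing path:

```xml
<fieldType name="text_html" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- char filter runs before the tokeniser; this is the combination
         the analysis screen fails to render -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```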
On 20 October 2015 at 15:21, Lee Carroll
wrote:
> Hi,
>
> on solr 4.7 I've ran into a strange issue. Whilst setting u
No Alexandre, it's just Sod's law (http://www.thefreedictionary.com/Sod's+Law)
:-)
Lee C
On 20 October 2015 at 15:38, Alexandre Rafalovitch
wrote:
> On 20 October 2015 at 10:26, Lee Carroll
> wrote:
> > B*ll*cks, before posting I spent an hour searching for issues, hone
Hi
running bin/solr start does not start up in cloud mode despite having
ZK_HOST set in /etc/default/solr.in.sh.
running openjdk 1.8
solr 6.5.1 on aws linux
zookeeper 3.4.6 on aws linux (3 node ensemble)
logs look clean both in zookeeper and solr
running bin/solr zk ls / returns
Connecting to
-z is specified BTW. The
> -c will start an _internal_ zookeeper in the absence of a -z
> parameter.
>
> Best,
> Erick
>
> On Sat, Jun 3, 2017 at 8:09 AM, Lee Carroll
> wrote:
> > Hi
> > running bin/solr start does not start up in cloud mode despite havin
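For reference, the include-file setting that should put bin/solr into cloud mode is just the one variable (host names and chroot below are illustrative):

```shell
# /etc/default/solr.in.sh -- hosts and /solr chroot are illustrative
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"
```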
The instructions at
https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to set up
UIMA integration with Solr require an AlchemyAPI key. This is no longer
available, as it's part of the IBM Watson offering.
What is the status of https://wiki.apache.org/solr/SolrUIMA ?
Would I be bett
I think he means a doc for each element. so you have a disease occurrence
index
1
1
exist
1-1
assuming (and it's a pretty fair assumption?) most groups have only a subset
of diseases, this will be a sparse matrix, so just don't index
the occurrence value "does not exist"
basically denormalize via
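In Solr's XML update format the denormalised docs might look like this (field names are made up for illustration):

```xml
<add>
  <doc>
    <field name="id">group1-disease1</field>
    <field name="group_id">group1</field>
    <field name="disease_id">disease1</field>
    <field name="occurrence">exists</field>
  </doc>
  <!-- no doc at all for a (group, disease) pair that does not occur,
       which is what keeps the matrix sparse -->
</add>
```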
Hi All,
In solr 4.7 this query
/solr/coreName/select/?q=*:*&fl=%27nasty%20value%27&rows=1&wt=json
returns
{"responseHeader":{"status":0,"QTime":2},"response":{"numFound":189796,"start":0,"docs":[{"'nasty
value'":"nasty value"}]}}
This is naughty. Has this been seen before / fixed ?
een broken.
>
> -Yonik
> http://heliosearch.org - native code faceting, facet functions,
> sub-facets, off-heap data
>
>
> On Wed, Nov 26, 2014 at 9:56 AM, Lee Carroll
> wrote:
> > Hi All,
> > In solr 4.7 this query
> > /solr/coreName/sele
generate pseudo-fields if needed only on the server and do not allow
clients to generate them.
Just out of interest, what is the use-case for a pseudo-field whose value
is a repeat of the field name?
On 26 November 2014 at 15:55, Yonik Seeley wrote:
> On Wed, Nov 26, 2014 at 10:47 AM,
Hi all
Creating a new collection fails with class not found:
org.apache.solr.handler.component.SearchHandler
Running under tomcat 7.0.59 with solr 4.10.3. Solr app looks to be deployed
ok and the web app looks fine when browsing.
An external zookeeper set up looks fine and the configs are loaded
Hi
it was jars copied into a solr-zk-cli directory to allow easy running of the
Solr ZK command-line client. Well, I think that is what fixed Tomcat! I've also
tried with Jetty with a clean Solr home and that also works, and seems a
much cleaner way of running multiple instances (probably more to do with
ru
Hi
I have 2 tables with the following data

table 1
id  treatment_list
1   a,b
2   b,c

table 2
treatment_id  name
a             name1
b             name 2
c             name 3

Using DIH can you create an index of the form
id-treatment_id  name
1a name1
1b
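One possible DIH sketch for this (untested; it assumes a helper view `treatment_pairs` that splits table 1's comma list into one row per (id, treatment_id) pair, since the split syntax varies by database):

```xml
<document>
  <!-- treatment_pairs is a hypothetical DB view: one row per (id, treatment_id) -->
  <entity name="pair" query="SELECT id, treatment_id FROM treatment_pairs">
    <field column="id" name="item_id"/>
    <field column="treatment_id" name="treatment_id"/>
    <!-- child entity resolves the treatment name for each pair -->
    <entity name="treatment"
            query="SELECT name FROM table2 WHERE treatment_id = '${pair.treatment_id}'">
      <field column="name" name="treatment_name"/>
    </entity>
  </entity>
</document>
```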
Hi, it looks like when a DIH entity has a delta and delta import query plus
a transformer defined, the execution of both queries calls the transformer. I
was expecting it to only be called on the import query. Sure, we can check
for a null value or something and just return the row during the delta
qu
Not sure if this has progressed further, but I'm getting test failures
for 3.3 also.
Trunk builds and tests fine, but 3.3 fails the test below.
(Note: I've a new box so this could be a silly setup issue I've missed, but
I think everything is in place: latest version of Java 1.6, latest
version of Ant.)
Hi Chris,
That makes sense. I was behind a firewall when running both builds. I
thought I was correctly proxied, but maybe the request was being
squashed
by something else before it even got to the firewall.
I've just run the tests again, this time outside the firewall, and all pass.
Thanks a lot
if you have a limited set of searches which need to use this, and they
act on a limited, known set of fields, you can concat fields at index
time and then facet:
PK   FLD1  FLD2  FLD3  FLD4  FLD5  copy45
AB0  A     B     0     x     y     x y
AB1  A     B     1     x
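In Solr 4.x and later the same concatenation could also be done server-side with an update-processor chain; a sketch (field names follow the table above, the chain name is made up):

```xml
<updateRequestProcessorChain name="concat45">
  <!-- copy FLD4 and FLD5 into copy45 (copy45 becomes multiValued) -->
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <arr name="source"><str>FLD4</str><str>FLD5</str></arr>
    <str name="dest">copy45</str>
  </processor>
  <!-- join the copied values into a single space-separated value -->
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="fieldName">copy45</str>
    <str name="delimiter"> </str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```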
see
http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/Similarity.html
On 27 September 2011 16:04, Mark wrote:
> I thought that a similarity class will only affect the scoring of a single
> field.. not across multiple fields? Can anyone else chime in with some
> input? Thanks.
>
lib directory on 1.4.1 with multi cores
I've specified shared lib as "lib" in the solr.xml file. My assumption
being this will be the lib under solr-home.
However my cores cannot load classes from any new jars placed in this
dir after a Tomcat restart.
What am I missing ?
I've prototyped a solution which makes use of multiple doc types.
Does the following have any value in terms of field value storage, or
are field values saved once, with pointers maintained from other records,
making the design below redundant?
we have CITY (500) and each city has many HOTEL (75000).
current: bool //for fq which searches only current versions
last_current_at: date time // for date range queries or group sorting
what was current for a given date
sorry if I've missed a requirement
lee c
On 13 October 2011 15:01, Mike Sokolov wrote:
> We have the identical problem in our syste
sorry, missed the permission stuff:
I think that's OK if you index the ACL as part of the document. That is
to say, each version has its own ACL. Match users against version ACL
data
as a filter query and use the last_current_at date as a sort.
On 13 October 2011 22:04, lee carroll wrote:
> curr
Hi Chris thanks for the response
> It's an inverted index, so *tems* exist once (per segment) and those terms
> "point" to the documents -- so having the same terms (in the same fields)
> for multiple types of documents in one index is going to take up less
> overall space then having distinct col
October 2011 11:54, lee carroll wrote:
> Hi Chris thanks for the response
>
>> It's an inverted index, so *tems* exist once (per segment) and those terms
>> "point" to the documents -- so having the same terms (in the same fields)
>> for multiple types of doc
this link was on the mailing list recently.
http://www.lucidimagination.com/search/document/dfa18d52e7e8197c/getting_answers_starting_with_a_requested_string_first#b18e9f922c1e4149
On 18 October 2011 00:59, aronitin wrote:
> Guys,
>
> It's been almost a week but there are no replies to the questi
;ll be saving a bit
> on file transfers when replicating your index, but not much else.
>
> Is it worth it? If so, why?
>
> Best
> Erick
>
> On Mon, Oct 17, 2011 at 11:07 AM, lee carroll
> wrote:
>> Just as a follow up
>>
>> it looks like stored fields are store
Take a look at facet query. You can facet on a query results not just
terms in a field
http://wiki.apache.org/solr/SimpleFacetParameters#facet.query_:_Arbitrary_Query_Faceting
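For example (field name and ranges made up), each facet.query clause gets its own count in the facet_queries section of the response:

```
/solr/select?q=*:*&facet=true
  &facet.query=price:[0 TO 25]
  &facet.query=price:[25 TO 50]
  &facet.query=price:[50 TO *]
```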
On 25 October 2011 10:56, Erik Hatcher wrote:
> I'm not following exactly what you're looking for here, but sounds lik
do your docs have daily availability ?
if so you could index each doc for each day (rather than have some
logic embedded in your data)
so instead of doc1 (1/9/2011 - 5/9/2011)
you have
doc1 1/9/2011
doc1 2/9/2011
doc1 3/9/2011
doc1 4/9/2011
doc1 5/9/2011
this makes search much easier and flexible
only one field can be a default. use copy field and copy the fields
you need to search into a single field and set the copy field to be
the default. That might be ok depending upon your circumstances
On 25 November 2011 12:46, kiran.bodigam wrote:
> In my schema i have defined below tag for index
You could use a synonyms file for the alternative names. That way you
do not need to store the alternatives, only index them.
For faceting, use a field whose analysis chain does not use the
synonyms filter. For search, the analysis chain will.
You also get the benefit of only storing the normative value.
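A sketch of the two field types in schema.xml (names illustrative): one applies the synonyms for searching, the other keeps the untouched normative value for faceting:

```xml
<!-- search field: alternatives mapped via synonyms.txt -->
<fieldType name="text_syn" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="false"/>
  </analyzer>
</fieldType>

<!-- facet field: no synonyms, so only the normative value appears -->
<fieldType name="text_facet" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```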
if "type" is a field use field faceting with an fq
q=datefield:[start TO end]&fq=type:(a b c)&facet.field=type
On 14 January 2012 17:56, Jamie Johnson wrote:
> I'm trying to figure out a way to execute a query which would allow me
> to say there were x documents over this period of time with
> Does
> that make more sense?
Ah I see.
I'm not certain but take a look at pivot faceting
https://issues.apache.org/jira/browse/SOLR-792
cheers lee c
check your defaultOperator, ensure it's OR
On 23 January 2012 05:56, jawedshamshedi wrote:
> Hi
> Thanks for the reply..
> I am using NGramFilterFactory for this. But it's not working as desired.
> Like I have a field article_type that has been indexed using the below
> mentioned field type.
>
>
on selection, issue another query to get your additional data (if I
follow what you want)
On 22 January 2012 18:53, Dave wrote:
> I take it from the overwhelming silence on the list that what I've asked is
> not possible? It seems like the suggester component is not well supported
> or understood,
"content-based recommender", so it's not CF etc.,
and it's a project, so it's whatever his supervisor wants.
Take a look at SolrJ; it should be more natural to integrate your Java code with.
(Although I'm not sure if it supports term vector comparison.)
good luck
On 26 January 2012 17:27, Walter Underwood wrote:
I use *analyzer type*="*query*" can you use search ?
On 17 December 2012 11:01, Dirk Högemann wrote:
> {!q.op=AND df=cl2Categories_NACE}08
> Gewinnung von Steinen und Erden, sonstiger Bergbau name="parsed_filter_queries">+cl2Categories_NACE:08
> +cl2Categories_NACE:gewinnung +cl2Categories_NAC
Hi
We are doing a lat/lon look up query using ip address.
We have a 6.5 million document core of the following structure
start ip block
end ip block
location id
location_lat_lon
the field defs are
the query at the moment is simply a range query
q=startIpNum:[* TO 180891652]A
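For what it's worth, a two-sided block lookup as filter queries might look like the sketch below (field names taken from the post; the IP integer is just an example value):

```
q=*:*
&fq=startIpNum:[* TO 180891652]
&fq=endIpNum:[180891652 TO *]
&fl=location_lat_lon
&rows=1
```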
o any parts of the query repeat a lot? Maybe there is room for fq.
>
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Jan 9, 2013 6:08 AM, "Lee Carroll"
> wrote:
>
> > Hi
> >
> > We are doing a lat/lon look up query using i
Does the stats component cache? If not, what are the alternatives for
finding max/min values of fields for a particular result set?
We think we are running into performance issues with the stats
component (250ms for a query when we issue it with the stats
component on).
Cheers
Hi, you have a lot of language processing for a field which contains,
at least in your example, non-words.
Do you need the synonyms, two lots of stemming, etc.?
What is the field for?
>>" I don't believe that this last point is what actually causes
>> my unsatisfactory results"
it probably is
an it better and create field types that are more
> specific for different field contents, correct?
>
> But still, that does not explain why I have indexed this specific value
> "EHT2011-2012" and the very same value does not match anything when I
> search for it.
>
Have you looked at external file fields?
http://lucidworks.lucidimagination.com/display/solr/Solr+Field+Types#SolrFieldTypes-WorkingwithExternalFiles
You will need a process to do the counts, and note the limitation that
updates are only visible after a commit, but I think it would fit your usecase.
On 23 Febru
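A sketch of the field type (attribute values illustrative); Solr reads the values from a file named external_<fieldname> in the data directory, re-read on commit:

```xml
<fieldType name="file" class="solr.ExternalFileField"
           keyField="id" defVal="0" valType="pfloat"/>
<!-- external_view_count holds lines of the form id=value -->
<field name="view_count" type="file"/>
```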
Your examples are not synonyms, so I don't think synonyms.txt by itself
is going to work.
This sounds like tagging using a taxonomy. Values written to the field
storing this taxonomy could be like:
livingthing/animal/cat [doc about cats]
livingthing/animal/dog [doc about dogs]
livingthing/animal [do
Vazquez,
Sorry I don't have an answer but I'd love to know what you need this for :-)
I think the logic is going to have to bleed into your search app. In
short copy field and your app knows which to search in.
lee c
On 30 April 2012 20:41, Erick Erickson wrote:
> OK, I took another look at w
Take a look at the clustering component
http://wiki.apache.org/solr/ClusteringComponent
Consider clustering off line and indexing the pre calculated group memberships
I might be wrong but I don't think there is any faceting mileage here.
Depending upon the use case
you might get some use out of
I'm not sure about your approach: turning off most of the features
which produce a similarity measure in a VSM and then wanting to sort
by a similarity could lead to pain. (I don't know your usecase, so this
could still be valid.)
One approach to (well, what I think your usecase might be...) is to
u
what is your DB schema? do you need to import all of the schema? (128
joined tables??)
or are the tables all independent? (if so, dump them out and import
them using CSV)
cheers lee c
On 7 June 2012 02:32, Jihyun Suh wrote:
> Each table has 35,000 rows. (35 thousands).
> I will check the log
If you go down the keep-word route you can return the "tags" to the
front end app using a facet field query. This often fits with many
use-cases for doc tags.
lee c
On 23 June 2012 22:37, Jack Krupansky wrote:
> One important footnote: the "keep words/synonym analyzer" approach will
> index the
Hi I'm pretty new to SOLR and interested in getting an idea about a simple
standard way of setting up a production SOLR service. I have read the FAQs
and the wiki around SOLR security and performance but have not found much on
a best practice architecture. I'm particularly interested in best practi
Hi,
Can a URL based datasource in DIH return non-XML? My pages being indexed are
written by many authors and will
often be invalid XHTML. Can DIH cope with this, or will I need another
approach?
thanks in advance Lee C
; HTML parser, e.g.
> http://sourceforge.net/projects/nekohtml/
>
> <http://sourceforge.net/projects/nekohtml/>Best
> Erick
>
> On Sun, Nov 21, 2010 at 7:46 PM, lee carroll
> wrote:
>
> > Hi,
> >
> > Can a URL based datasource in DIH return non xml. My pag
Hi We are investigating / looking to deploy solr on to a weblogic cluster of
4 servers.
The cluster has no concept of a master / slave configuration so we are
thinking of the following solutions (new to solr so some or all may be bad
ideas:-)
1) all 4 servers run their own index. we re-index each
2010 11:53, lee carroll wrote:
> Hi We are investigating / looking to deploy solr on to a weblogic cluster
> of 4 servers.
> The cluster has no concept of a master / slave configuration so we are
> thinking of the following solutions (new to solr so some or all may be bad
> ideas:
Hi
I've built a schema for a proof of concept and it is all working fairly
fine; naive maybe, but fine.
However I think we might run into trouble in the future if we ever use
facets.
The data models train destination city routes from an origin city:
Doc:City
Name: cityname [uniq key]
CityTy
dard:[price1 TO
> price2]&facet.query:fareJanStandard[price2 TO price3]
> You can string as many facet.query clauses as you want, across as many
> fields as you want, they're all
> independent and will get their own sections in the response.
>
> Best
> Erick
>
> On Wed, Dec 1,
o put in arbitrary
> ranges
> this scheme probably wouldn't work...
>
> Best
> Erick
>
> On Wed, Dec 1, 2010 at 10:22 AM, lee carroll
> wrote:
>
> > Hi Erick,
> > so if i understand you we could do something like:
> >
> > if Jan is select
Sorry Geert, missed off the price value bit from the user interface, so we'd
display:
Facet price
Standard fares [10]
First fares [3]
When traveling
in Jan [9]
in feb [10]
in march [1]
Fare Price
0 - 25: [20]
25 - 50: [10]
50 - 100: [2]
cheers lee c
On 1 December 2010 17:00, lee carroll
.DCFeb
> in march [1]-> FaresPerDate.DCMarch
>
> 2) the user has selected January
> q=*:*&facet.field:FaresPerDate&fq=FaresPerDate:DCJan&facet.query=_pDCJan:[0
> TO 20]&facet.query=_pDCJan:[20 TO 40]
>
> Standard fares [10] --> FaresPerDate.standardJan
> Fir
Hi List,
Coming to an end of a prototype evaluation of SOLR (all very good etc. etc.).
Getting to the point of looking at bells and whistles. Does SOLR have a
thesaurus? Can't find any reference
to one in the docs or on the wiki etc. (Apart from a few mail threads which
describe the synonym.txt as
time.
>
> Maybe have a look at http://poolparty.punkt.at/ a full features SKOS
> thesaurus management server.
> It's also providing webservices which could feed such a Solr filter.
>
> Kind regards
> Michael
>
>
> - Original Message -
> Von: "lee carroll&q
Hi, can the following usecase be achieved?
value to be analysed at index time: "this is a pretty line of text"
synonym list is: pretty => scenic, text => words
value placed in the index is "scenic words"
That is to say, only the matching synonyms. Basically I want to produce a
normalised set of p
nymFilterFactory
>
> with the => syntax, I think that's what you're looking for
>
> Best
> Erick
>
> On Mon, Dec 6, 2010 at 6:34 PM, lee carroll wrote:
>
>> Hi Can the following usecase be achieved.
>>
>> value to be analysed at index time "
Hi Lee,
>
>
> On Mon, Dec 6, 2010 at 10:56 PM, lee carroll
> wrote:
>> Hi Erik
>
> Nope, Erik is the other one. :-)
>
>> thanks for the reply. I only want the synonyms to be in the index
>> how can I achieve that ? Sorry probably missing something obvious
rolling the whole process anyway
>
> Best
> Erick
>
> On Tue, Dec 7, 2010 at 6:07 AM, lee carroll >wrote:
>
> > Hi tom
> >
> > This seems to place in the index
> > This is a scenic line of words
> > I just want scenic and words in the index
&
words that match a list, use the
> KeepWordFilterFactory, with your list of synonyms.
>
>
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.KeepWordFilterFactory
>
> I'd put the synonym filter first in your configuration for the field,
> then the keep words filter factory.
>
&
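Putting the two suggestions together, the index-time chain might look like the sketch below (file names illustrative):

```xml
<analyzer type="index">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <!-- synonyms.txt maps source terms to the normalised ones, e.g.
       pretty => scenic
       text => words -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
  <!-- keepwords.txt lists only the normalised terms (scenic, words),
       so everything else is dropped from the index -->
  <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"
          ignoreCase="true"/>
</analyzer>
```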
Hi Alessandro,
Can you use a javascript library which handles ajax and json / jsonp
You will end up with much cleaner client code for example a jquery
implementation looks quite nice using solrs neat jsonp support:
queryString = "*:*"
$.getJSON(
"http://[server]:[port]/solr/select/?js
Hi Chris,
It's all a bit early in the morning for this, mind :-)
The question asked, in good faith, was: does Solr support or extend to
implementing a thesaurus? It looks like it does not, which is fine. It does
support synonyms and synonym rings, which is again fine. The ski example was
an illustrat
s Lee c
On 10 December 2010 09:38, Peter Sturge wrote:
> Hi Lee,
>
> Perhaps Solr's clustering component might be helpful for your use case?
> http://wiki.apache.org/solr/ClusteringComponent
>
>
>
>
> On Fri, Dec 10, 2010 at 9:17 AM, lee carroll
> wrote:
>
od applied, because e.g. boarder terms would cause lots of
> misleading results.
>
> Péter
>
> 2010/12/10 lee carroll :
> > Hi Peter,
> >
> > Thats way to clever for me :-)
> > Discovering thesuarus relationships would be fantastic but its not clear
> >
During data import, can you update a record with min and max fields? These
would be equal in the case of a single non-range value.
I know this is not a Solr solution but a data pre-processing one, but it would
work?
Failing the above, I've seen in the docs reference to a compound value field
(in the con
I think this could be down to the same-origin rule applied to Ajax requests:
you're not allowed to display content from two different servers :-(
The good news: Solr supports JSONP, which is a neat trick around this. Try this
(pasted from another thread):
queryString = "*:*"
$.getJSON(
"ht
Hi ramzesua,
Synonym lists will often be application specific and will of course be
language specific. Given this, I don't think you can talk about a generic
Solr synonym list; it just won't be very helpful in lots of cases.
What are you hoping to achieve with your synonyms for your app?
On 23 Dec
Hi Satya,
This is not a Solr issue. In your client which makes the JSON request, you
need to have some error checking so you catch the error.
Occasionally people have Apache set up to return a 200 OK HTTP response with
a custom page on HTTP errors (often for spurious security considerations),
but t
Sorry not an answer but a +1 vote for finding out best practice for this.
Related to it is DoS attacks. We have rewrite rules between the proxy
server and Solr which attempt to filter out undesirable stuff, but would it
be better to have a query app doing this?
any standard rewrite rules whic
Hi
I'm indexing a set of documents which have a conversational writing style.
In particular the authors are very fond
of listing facts in a variety of ways (this is to keep a human reader
interested) but it's causing my index trouble.
For example instead of listing facts like: the house is white,
e matches in the synonyms file but is that
the best approach or could nlp help here?
cheers lee
On 10 January 2011 18:21, Grant Ingersoll wrote:
>
> On Jan 10, 2011, at 12:42 PM, lee carroll wrote:
>
> > Hi
> >
> > I'm indexing a set of documents which have a conv
break"
>
> and bang london is top. talk about a relevancy problem :-)
>
> now i was thinking of using phrase matches in the synonyms file but is that
> the best approach or could nlp help here?
>
> cheers lee
>
>
>
>
>
> On 10 January 2011 18:21, Grant Ingers
use dismax q for first three fields and a filter query for the 4th and 5th
fields
so
q="keyword1 keyword2"
qf=field1 field2 field3
pf=field1 field2 field3
mm=something sensible for you
defType=dismax
fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)
take a look at the dismax docs for
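Assembled as a single request it might look like this (values illustrative; note that qf and pf take space-separated field lists):

```
/solr/select?defType=dismax
  &q=keyword1 keyword2
  &qf=field1 field2 field3
  &pf=field1 field2 field3
  &mm=2
  &fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)
```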
the default operator can be set in your config to be OR, or on the query
with something like q.op=OR
On 27 January 2011 11:26, Isan Fulia wrote:
> but q="keyword1 keyword2" does AND operation not OR
>
> On 27 January 2011 16:22, lee carroll
> wrote:
>
>
sorry, ignore that - we are on dismax here - look at the mm param in the docs;
you can set this to achieve what you need
On 27 January 2011 11:34, lee carroll wrote:
> the default operation can be set in your config to be "or" or on the query
> something like q.op=OR
>
>
>
&
rd4)) OR
> field2:((keyword1 AND keyword2) OR (keyword3 AND keyword4)) OR
> field3:((keyword1 AND keyword2) OR (keyword3 AND keyword4))
>
>
>
>
> On 27 January 2011 17:06, lee carroll
> wrote:
>
> > sorry ignore that - we are on dismax here - look at mm param in the
Hi list,
It looks like you can use a JNDI datasource in the data import handler;
however I can't find any syntax for this.
Where is the best place to look? (and can anyone confirm JNDI does work in
DataImportHandler?)
ah, should this work or am I doing something obvious wrong?
in config
in dataimport config
what am I doing wrong?
On 5 February 2011 10:16, lee carroll wrote:
> Hi list,
>
> It looks like you can use a jndi datsource in the data import handler.
> however i can't find a
Hi List
I'm trying to achieve the following:
text in: "this aisle contains preserves and savoury spreads"
desired index entry for a field to be used for faceting (i.e. a strict set of
normalised terms)
is "jams" "savoury spreads", i.e. two facet terms
current set up for the field is
Just to add: things are not going as expected before the keepword filter; the
synonym list is not being expanded for shingles. I think I don't understand term
position.
On 5 February 2011 16:08, lee carroll wrote:
> Hi List
> I'm trying to achieve the following
>
> text in "th
Hi Bill,
quoting in the synonyms file did not produce the correct expansion :-(
Looking at Chris's comments now
cheers
lee
On 5 February 2011 23:38, Bill Bell wrote:
> OK that makes sense.
>
> If you double quote the synonyms file will that help for white space?
>
> Bill
>
>
> On 2/5/11 4:37
ntm filter ?
example synonym line which is problematic
termA1,termA2,termA3, phrase termA, termA4 => normalisedTermA
termB1,termB2,termB3 => normalisedTermB
when the synonym filter uses the keyword tokeniser
only "phrase term A" ends up being matched as a synonym :-)
lee
On
Hi, still no luck with this. Is the problem with
the name attribute of the datasource element in the data config?
On 5 February 2011 10:48, lee carroll wrote:
> ah should this work or am i doing something obvious wrong
>
> in config
>
>jndiName="java:sourcepa
Hi, an MLT query with a q parameter which returns multiple matches, such as
q=id:45 id:34
id:54&mlt.fl=field1&mlt.mindf=1&mlt.mintf=1&mlt=true&fl=id,name
seems to return the results of three separate MLT queries, i.e.
q=id:45&mlt.fl=field1&mlt.mindf=1&mlt.mintf=1&mlt=true&fl=id,name
+
q=id:34&mlt.fl
Hi Marc,
I don't want to sound too prissy, and also assume too much about your
application, but a generic synonym file could do more harm than good. Lots of
applications have specific vocabularies, and a specific synonym list is what
is needed. Remember, synonyms increase recall but reduce precision. The
Hi Mark,
I think you would need to issue two separate queries. It's also a, I was
going to say odd,
usecase - but who am I to judge - an interesting usecase. If you have a faceted
navigation front
end you are in real danger of confusing your users. I suppose it's a case of
what do you want to achieve? Facet
Tanguy
You might have tried this already, but can you set overwriteDupes to
false and set the signature key to be the id? That way Solr
will manage updates.
from the wiki:
http://wiki.apache.org/solr/Deduplication
HTH
Lee
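For reference, the chain described on that wiki page might look like this in solrconfig.xml (the fields list is illustrative and must match your schema):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- write the signature into the id field, as suggested above -->
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">false</bool>
    <!-- fields used to compute the signature; adjust to your schema -->
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```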
On 30 May 2011 08:32, Tanguy Moal wrote:
>
> Hello,
>
> Sorry for re-
I don't think you can assign a synonyms file dynamically to a field.
you would need to create multiple fields for each lang / cat phrases
and have their own synonyms file referenced for each field. that would
be a lot of fields.
On 1 June 2011 09:59, Spyros Kapnissis wrote:
> Hello to all,
Deniz,
it looks like you are missing an index analyzer - or have you removed
that for brevity?
lee c
On 2 June 2011 10:41, Gora Mohanty wrote:
> On Thu, Jun 2, 2011 at 11:58 AM, deniz wrote:
>> Hi all,
>>
>> here is a piece from my solfconfig:
> [...]
>> but somehow synonyms are not read... I
oh, and it's a string field; change this to be text if you need analysis:
class="solr.StrField"
lee c
On 2 June 2011 11:45, lee carroll wrote:
> Deniz,
>
> it looks like you are missing an index anlayzer ? or have you removed
> that for brevity ?
>
> lee c
>
>
Juan
I don't think so.
You can try indexing fields like myfield_en, myfield_fr, myfield_xx
if you know what language you are dealing with at index and query time.
You can also have separate cores for your documents for each language
if you don't want to complicate your schema;
again you will need
this is from another post and could help
Can you use a javascript library which handles ajax and json / jsonp
You will end up with much cleaner client code for example a jquery
implementation looks quite nice using solrs neat jsonp support:
queryString = "*:*"
$.getJSON(
"http://[serv
use Solr's JSONP format
On 2 June 2011 08:54, Romi wrote:
> sorry for the inconvenience, please look at this file
> http://lucene.472066.n3.nabble.com/file/n3014224/JsonJquery.text
> JsonJquery.text
>
>
>
> -
> Thanks & Regards
> Romi
> --
> View this message in context:
> http://lucene.47
just to re-iterate: JSONP gets round the Ajax same-origin policy
2011/6/2 François Schiettecatte :
> This is not really an issue with SOLR per se, and I have run into this
> before, you will need to read up on 'Access-Control-Allow-Origin' which needs
> to be set in the http headers that your ajax
did you include the jQuery lib?
make sure you use the jsoncallback,
ie
$.getJSON(
"http://[server]:[port]/solr/select/?jsoncallback=?";,
{"q": queryString,
"version": "2.2",
"start": "0",
"rows": "10",
"indent":