You didn't happen to notice that you have one field named RestaurantLocation
and another named RestaurantName, did you?
You must be submitting 'RestaurantName', and it's being applied to a geo field.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others' mistakes.
A possible shortcut?
Write a regex that will parse out the fields as you want them, and put that
into a shell script that calls Solr?
Dennis Gearon
On Wed, Jan 12, 2011 at 1:40 AM, alexei wrote:
[...]
> The datasource number is stored in the database.
> The parent entity queries for this number and in theory it
> should become available to the child entity - "Article" in my case.
I do not think that it is possible to have the datasource nam
What are your benchmarks?
Please describe your problem in detail:
What exactly is the problem?
How are you indexing and querying?
What data are you indexing?
How much data are you indexing?
What are your server configurations?
How much RAM are you using?
-
Grijesh
Hi,
I took the latest build from Hudson and installed it on my computer. I
made the following changes in my schema.xml.
When I run a query like this:
HTTP ERROR 500
Problem accessing /solr/select. Reason:
The field restaurantName does not support spatial filter
The first thing is that Solr cannot understand your raw log files. Solr needs
data that matches the defined schema, and it does not know your log file
format.
So you have to write a parser program that converts your log files into one of
the formats Solr can ingest. Then you will be able to index them.
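As a sketch of such a parser (the log line format here is an assumption, and the field names `date`, `time`, `level`, `user`, `action` are invented for illustration), one could emit CSV that Solr's CSV update handler can ingest:

```python
import csv
import io
import re

# Hypothetical log format (an assumption, not your actual format):
#   "2011-01-12 10:30:00 INFO user=alice action=login"
LOG_PATTERN = re.compile(
    r"^(?P<date>\S+) (?P<time>\S+) (?P<level>\S+) "
    r"user=(?P<user>\S+) action=(?P<action>\S+)$"
)

FIELDS = ("date", "time", "level", "user", "action")

def log_to_csv(lines):
    """Convert matching log lines into a CSV string for Solr's CSV handler."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(FIELDS)  # header row: must match your Solr field names
    for line in lines:
        m = LOG_PATTERN.match(line.strip())
        if m:  # silently skip lines that do not match the assumed format
            writer.writerow([m.group(f) for f in FIELDS])
    return out.getvalue()
```

The resulting string could then be posted to Solr's `/update/csv` handler.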
If I convert it to CSV or XML it will be time consuming, because the
indexing and retrieval need to be real-time. Is there any way I can do this
other than converting? If not, what are the ways I can convert them to CSV
and XML? And lastly, which is the doc folder of Solr?
It will not work.
I think your log files are not in Solr's Doc XML format.
The first thing is that your log files are raw data.
You have to convert them to a format Solr can read, either Solr XML Doc
or CSV, to index them in Solr, as Gora suggested to you.
-
Grijesh
I copied it to the same exampledocs folder and ran:
#java -jar post.jar log.txt
and I got:
SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8,
other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr
How did you parse your log?
Which approach did you take to index the log file data?
Have you done any of the work Gora Mohanty suggested to you?
I am local to the Delhi NCR area.
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Input-raw-log-file-tp2210043p2239505.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have installed and tested the sample XML file and tried indexing.
Everything went successfully, but when I tried with log files I got an error.
I tried reading the schema.xml and didn't get a clear idea. Can you please
help?
What do you mean by a query-time index issue? Please provide more detail
about your problem.
The field type textSpell is defined in the example schema.xml for spell
suggestion. What analysis chain have you used in your "textSpell" field?
For what purpose are you using that field?
-
Grijesh
On Wednesday 12 January 2011 10:56 AM, Grijesh.singh wrote:
Which type of performance issue do you have, index time or query time?
-
Grijesh
I have query-time issues. Also tell me under which conditions the field
type 'textSpell' is used. Does it affect the performance of Solr queries?
Which type of performance issue do you have, index time or query time?
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-search-performance-tp2239298p2239338.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Stefan,
Yes, it works :). Thanks...
But I have a question: can I get only spell
suggestions even if the word is spelled correctly? I mean words near to
it...
ex:-
http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck
Hi
Please tell me what changes to make in the Solr config file to improve
Solr search performance.
Thanks!
Check their configurations; they use different analysis, i.e., their
definitions are different.
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/field-Type-Textsplell-tp2239237p2239275.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
Please explain the difference between the field types 'text' and
'textSpell' in the Solr schema.
Thanks!
Isha
I am using Solr 4.0 for my testing right now, if that helps.
Adam
On Jan 11, 2011, at 10:46 PM, Adam Estrada
wrote:
> All,
>
> I have the following query which works just fine for querying a date range.
> Now I would like to add any kind of spatial query to the mix. Would someone
> be so kind
All,
I have the following query which works just fine for querying a date range.
Now I would like to add any kind of spatial query to the mix. Would someone
be so kind as to help me out with an example spatial query that works in
conjunction with my date range query?
http://localhost:8983/solr/se
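A hedged sketch of combining the two as filter queries (the field names `manu_date` and `store` are assumptions borrowed from the example schema, and the point/distance values are arbitrary; adjust all of them to your schema):

```python
from urllib.parse import urlencode

# Two fq clauses: one date range, one geofilt spatial filter.
# "manu_date" and "store" are assumed field names, not from the original post.
params = {
    "q": "*:*",
    "fq": [
        "manu_date:[2010-01-01T00:00:00Z TO 2011-01-01T00:00:00Z]",
        "{!geofilt sfield=store pt=45.15,-93.85 d=5}",  # within 5 km of the point
    ],
}
# doseq=True emits one fq= pair per list element
url = "http://localhost:8983/solr/select?" + urlencode(params, doseq=True)
```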
When you hit corruption is it always this same problem?:
java.lang.RuntimeException: term source:margolisphil docFreq=1 !=
num docs seen 0 + num docs deleted 0
Can you run with Lucene's IndexWriter infoStream turned on, and catch
the output leading to the corruption? If something is somehow me
By placing some strategic debug messages, I have found that the JDBC
connections are not being closed until all elements have been
processed (in the entire config file). A simplified example would be:
... field list ...
... field list ...
mrw wrote:
>
>
> We're actually using the default facet.limit value of 100. I will
> increase it to 200 and see if the non-zero-count facets show up. Maybe
> that was causing my confusion.
>
Yep -- the 0-count facets were not being returned due to the facet.limit
cutoff.
So, unless the
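The facet.limit interaction described above can be sketched as a parameter set (the facet field `cat` is an assumed example, not from the thread):

```python
from urllib.parse import urlencode

# facet.mincount=0 asks for zero-count values too, but they are only returned
# if they fit under the facet.limit cap (default 100), which is what hid them.
params = {
    "q": "*:*",
    "rows": 0,
    "facet": "true",
    "facet.field": "cat",   # assumed facet field
    "facet.mincount": 0,
    "facet.limit": 200,     # raised from the default 100
}
query_string = urlencode(params)
```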
iorixxx wrote:
>
>
> After re-reading, it is not normal that none of the 0-count facets are
> showing up. Can you give us the full parameter list you used to obtain
> this, by adding &echoParams=all to your search URL?
>
> Maybe you limit facets to three in your first query? What happens when
> you a
Good point! That's an enhancement we would definitely welcome as well.
Currently, we too have to remote-desktop to the Solr machine and search
through the logs..
Any thoughts?
Cheers,
-- Savvas
On 11 January 2011 19:59, roz dev wrote:
> Hi All
>
> We are using SolrJ client (v 1.4.1) to integra
> >> Notice how, before the fq clause is added, none of
> the
> >> 0-count facets are
> >> returned, even though facet.mincount = 0, but
> afterward, a
> >> bunch of 0-count
> >> facets are returned?
> >>
> > This is normal.
>
> What's behind that? Is it widening the results before
> the mincount
Hi Gora,
Thank you for your reply.
The datasource number is stored in the database.
The parent entity queries for this number, and in theory it
should become available to the child entity - "Article" in my case.
I am initiating the import via solr/db/dataimport?command=full-import
Script is a
Hi All
We are using SolrJ client (v 1.4.1) to integrate with our solr search
server.
We notice that whenever a SolrJ request does not match the Solr schema, we
get a Bad Request exception, which makes sense.
org.apache.solr.common.SolrException: Bad Request
But, SolrJ Client does not provide any clu
>> Notice how, before the fq clause is added, none of the
>> 0-count facets are
>> returned, even though facet.mincount = 0, but afterward, a
>> bunch of 0-count
>> facets are returned?
>>
> This is normal.
What's behind that? Is it widening the results before the mincount
constraint is being a
> I've noticed that performing a query with facet.mincount=0
> and no fq clauses
> results in a response where only facets with non-zero
> counts are returned,
> but adding in an fq clause (caused by a user selecting a
> non-zero-valued
> facet value checkbox) actually causes a bunch of 0-count
> f
On Tue, Jan 11, 2011 at 11:10 PM, alexei wrote:
>
> Hi,
>
> I am in a situation where the data needed for one of the fields in my
> document
> may be sitting in a different datasource each time.
[...]
At what point of time will you be aware of which datasource
the field is coming from? How are yo
I currently have a DIH that is working in terms of being able to
search/filter on various facets, but I'm struggling to figure out how to
take it to the next level of what I'd like ideally.
We have a database where the "atomic" unit is a condition (like an
environment description - temp, light, h
I've noticed that performing a query with facet.mincount=0 and no fq clauses
results in a response where only facets with non-zero counts are returned,
but adding in an fq clause (caused by a user selecting a non-zero-valued
facet value checkbox) actually causes a bunch of 0-count facet values
com
Hi,
I am in a situation where the data needed for one of the fields in my
document
may be sitting in a different datasource each time.
I would like to be able to configure something like this:
http://lucene.472066.n3.nabble.com/Resolve-a-DataImportHandler-datasource-based-on-previous-entity-tp22
I'm NOT sure about any of it, but I THINK that READ ONLY access, with one
Solr instance doing writes, is possible. I've heard that it's NEVER possible
to have multiple Solr instances writing.
Dennis Gearon
Hello,
Is it possible to deploy multiple Solr instances with different
context roots pointing to the same Solr core? If I do this, will there be
any deadlocks or file handle issues? The reason I need this setup is
that I want to expose Solr to a third-party vendor via a different
conte
Hello,
I'm investigating an issue where spellcheck queries are tokenized without
being explicitly told to do so, resulting in suggestions such as
"www.www.product4sale.com.com" for the queries such as
"www.product4sale.com".
The default RegexFragmenter fragmenter (name="regex") uses the regular
I'm not quite sure whether your question is answered or not, so ignore me if
it is...
But I'm having trouble envisioning this part
"they can use
the dblocation field to retrieve the data for editing purposes (and then
re-index it following their edits)."
I'd never, ever, ever let a user edit the
On Jan 11, 2011, at 9:37 AM, stockii wrote:
>
> simplest solution is more RAM !?
>
> sometimes i think, that is a standard solution for problems with solr ;-)
FWIW, it's a solution for most computing problems, right?
>
> i going to buy 100 GB RAM :P
That won't do it. More RAM is sometime
Thanks for your answer,
It's not a disk space problem here :
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       280G   22G  244G   9% /
We will try to install solr on a different server (We just need a little
time for that)
Stéphane
On 11/01/2011 at 15:42,
Stéphane,
I've only seen production index corruption when during merge the
process ran out of disk space, or there is an underlying hardware
related issue.
On Tue, Jan 11, 2011 at 5:06 AM, Stéphane Delprat
wrote:
> Hi,
>
>
> I'm using Solr 1.4.1 (Lucene 2.9.3)
>
> And some segments get corrupted
Simplest solution is more RAM!?
Sometimes I think that is a standard solution for problems with Solr ;-)
I'm going to buy 100 GB RAM :P
--
View this message in context:
http://lucene.472066.n3.nabble.com/Tuning-StatsComponent-tp2225809p2234557.html
Sent from the Solr - User mailing list archive at Nabble.com.
(11/01/11 20:49), Frederico Azeiteiro wrote:
Hi all,
I had indexed a text with the word "InterContinental" with fieldType
text (with the default filters just removing the
solr.SnowballPorterFilterFactory).
As far as I understand, using the filter solr.WordDelimiterFilterFactory
with splitOn
On Jan 10, 2011, at 10:57 PM, TxCSguy wrote:
>
> Hi,
>
> I'm not sure if this question is better posted in Solr - User or Solr - Dev,
> but I'll start here.
>
> I'm interested to find some documentation that describes in detail how
> synonym expansion is handled at index time.
> http://www.l
Hi,
I'm using Solr 1.4.1 (Lucene 2.9.3)
And some segments get corrupted:
4 of 11: name=_p40 docCount=470035
compound=false
hasProx=true
numFiles=9
size (MB)=1,946.747
diagnostics = {optimize=true, mergeFactor=6,
os.version=2.6.26-2-amd64, os=Linux, mergeDocStores=true,
This might be the solution.
http://lucene.apache.org/java/3_0_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html
2011/1/11 Matti Oinas :
> Sorry, the message was not meant to be sent here. We are struggling
> with the same problem here.
>
> 2011/1/11 Matti Oinas
On Jan 10, 2011, at 5:04 PM, lee carroll wrote:
> Hi Grant,
>
> Its a search relevancy problem. For example:
>
> a document about london reads like
>
> London is not very good for a peaceful break.
>
> we analyse this at the (i can't remember the technical term) is it lexical
> level? (bloody
Satya,
what about rows=0? If I got it correctly... :)
Regards
Stefan
On Tue, Jan 11, 2011 at 1:19 PM, satya swaroop wrote:
> Hi Gora,
> I am using solr for file indexing and searching, But i have a
> module where i dont need any files result but only the spell suggestions,
> so
> i ask
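A sketch of Stefan's rows=0 suggestion, reusing the handler and host from the original query in the thread: rows=0 suppresses the document results so only the spellcheck block comes back.

```python
from urllib.parse import urlencode

# Handler name and port are taken from the thread's own example URL.
params = {
    "q": "java daka usar",
    "rows": 0,                  # no document results, only the spellcheck section
    "spellcheck": "true",
    "spellcheck.count": 5,
    "spellcheck.collate": "true",
}
url = "http://localhost:8080/solr/spellcheckCompRH?" + urlencode(params)
```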
Sorry, the message was not meant to be sent here. We are struggling
with the same problem here.
2011/1/11 Matti Oinas :
> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Analyzers
>
> On wildcard and fuzzy searches, no text analysis is performed on the
> search word.
>
> 2011/1/11 Kári
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Analyzers
On wildcard and fuzzy searches, no text analysis is performed on the
search word.
2011/1/11 Kári Hreinsson :
> Hi,
>
> I am having a problem with the fact that no text analysis are performed on
> wildcard queries. I have the
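Since no analysis runs on the wildcard term, a common client-side workaround (assuming your index-time chain lowercases, which is an assumption about the schema in question) is to normalise the term yourself before appending the wildcard:

```python
def wildcard_term(user_input):
    """Lowercase the user's term client-side before adding the wildcard,
    mirroring an (assumed) index-time LowerCaseFilter."""
    return user_input.lower() + "*"
```

Note this only mirrors lowercasing; any other index-time transformations (e.g. accent folding) would also need to be replicated client-side.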
Hi Gora,
I am using Solr for file indexing and searching, but I have a
module where I don't need any file results, only the spell suggestions.
So I asked: is there any way in Solr to get only the spell suggestion
responses? I think it is clear to you now. If not, tell me and I will
Hi,
I am having a problem with the fact that no text analysis is performed on
wildcard queries. I have the following field type (a bit simplified):
My problem has to do with Icelandic characters, when I index a document with a
text f
Hi all,
I had indexed a text with the word "InterContinental" with fieldType
text (with the default filters just removing the
solr.SnowballPorterFilterFactory).
As far as I understand, using the filter solr.WordDelimiterFilterFactory
with splitOnCaseChange="1", this word is indexed as:
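A rough client-side simulation of what splitOnCaseChange="1" does to such a token (this is only an approximation of the real WordDelimiterFilterFactory behaviour, which has many more options):

```python
import re

def split_on_case_change(token):
    """Approximate WordDelimiterFilterFactory's splitOnCaseChange: break a
    token wherever a lowercase letter is followed by an uppercase one.
    Runs of uppercase letters (acronyms) are kept together."""
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+", token)
```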
The problem was an incorrect pk definition in data-config.xml.
The pk attribute needs to match the Solr uniqueKey field, so in my case
changing the pk value from id to uuid solved the problem.
2010/12/7 Matti Oinas :
> Thanks Koji.
>
> Problem seems to be that template transformer is
Look at Solr function queries; they might help you.
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/pruning-search-result-with-search-score-gradient-tp2233760p2233773.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi everyone,
I would like to be able to prune my search results by removing the less
relevant documents. I'm thinking about using the search score: I use
the search scores of the document set (I assume they are sorted in
descending order), normalise them (0 would be the lowest value and 1
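A minimal sketch of that normalise-then-prune idea (the threshold value is arbitrary, and the min-max normalisation is my reading of the proposal):

```python
def prune_by_score(docs_with_scores, threshold=0.5):
    """Keep documents whose min-max normalised score is >= threshold.

    docs_with_scores: list of (doc_id, raw_score) pairs, assumed sorted
    by descending score as Solr returns them.
    """
    scores = [s for _, s in docs_with_scores]
    lo, hi = min(scores), max(scores)
    if hi == lo:  # all scores equal: nothing to prune on
        return list(docs_with_scores)
    return [
        (doc, score)
        for doc, score in docs_with_scores
        if (score - lo) / (hi - lo) >= threshold
    ]
```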
You have to explicitly call
http://<host>:<port>/solr/dataimport?command=delta-import to start a
delta-import.
You can set it up as a cronjob.
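A minimal script one could run from cron to trigger the delta-import (host, port, and core path are assumptions; adjust to your setup):

```python
from urllib.request import urlopen

# Assumed host/port/core path; change to match your deployment.
DELTA_URL = "http://localhost:8983/solr/dataimport?command=delta-import"

def trigger_delta_import(url=DELTA_URL):
    """Fire the delta-import request and return the HTTP status code."""
    with urlopen(url) as resp:
        return resp.status
```

A crontab entry would then invoke this script at the desired interval.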
As far as I know, the dataimport.properties file contains parameters to be
used in the DataImport XML configuration file; for example, for
database-related imports it contains text such as
#W
On Tue, Jan 11, 2011 at 10:06 AM, Dinesh wrote:
>
> Can you give an example, like something that is currently being used?
Sorry, I do not have anything like this at hand at the moment.
>
> i'am an
> engin
Hi,
Absolutely, this problem is squarely in the scope of NLP. To handle negation
("not"), passive voice, and tense (past, future, ...), you need more advanced
linguistic analysis (morpho-syntax) of the phrasing than simple tokenisation
enhanced with stemming or lemmatisation. The output of this kind of analysis
is normally a tree-like stru
Just to be more explicit in terms of using synonyms, our thinking was
something like:
1 analyse texts for patterns such as "not x" and list these out
2 in a synonyms txt file, list in effect antonyms, e.g.
not pretty -> ugly
not ugly -> pretty
not lively -> quiet
not very nice ->
On Tue, Jan 11, 2011 at 3:07 PM, satya swaroop wrote:
> Hi All,
> can we get just suggestions only without the files response??
> Here I state an example
> when i query
> http://localhost:8080/solr/spellcheckCompRH?q=java daka
> usar&spellcheck=true&spellcheck.count=5&spellcheck.collate
Hi All,
can we get just suggestions only without the files response??
Here I state an example
when i query
http://localhost:8080/solr/spellcheckCompRH?q=java daka
usar&spellcheck=true&spellcheck.count=5&spellcheck.collate=true
i get some result of java files and then the suggestions f
Hi,
Is there any way one can define properties for a function plugin
extending ValueSourceParser inside solrconfig.xml (as one can do with
the "defaults" attribute for a query parser plugin inside the request
handler)?
Thanks,
Dante
ah, cool thx =)
--
View this message in context:
http://lucene.472066.n3.nabble.com/unequal-in-fq-tp2233235p2233261.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
It works just like boolean operators in the main query:
fq=-status:refunded
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html#Boolean operators
Cheers
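For example, building such a request (host and port as in the standard example setup):

```python
from urllib.parse import urlencode

# A leading "-" negates the clause, exactly as in the main query syntax:
# return everything whose status is not "refunded".
params = {"q": "*:*", "fq": "-status:refunded"}
url = "http://localhost:8983/solr/select?" + urlencode(params)
```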
> hello.
>
> i need to filter a field. i want all fields are not like the given string.
>
> eg.: ...&fq=status!=refundend
>
> h
Hello.
I need to filter on a field: I want all documents whose field does not match
the given string.
E.g.: ...&fq=status!=refunded
How can I realize this in Solr? I don't want to use
...string+OR+string+OR+...
--
View this message in context:
http://lucene.472066.n3.nabble.com/unequal-in-fq-tp2233235p2233235.h