The files in solr/example/solr/conf are an example of how to do a
schema. You want the 'location' type for your lat/long data. This is
one field storing both values with some custom geographic search code.
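A sketch of what the relevant schema.xml entries might look like (the field name "store" is an assumption; the type definition follows the example schema's LatLonType):

```xml
<!-- one logical field holding lat,lon; backed by two dynamic sub-fields -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
<field name="store" type="location" indexed="true" stored="true"/>
```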
On Wed, May 9, 2012 at 10:28 PM, Jack Krupansky wrote:
> And you will have to define a "tex
All of the "text_en", "text_es" entries in the schema.xml are "field
types". These field types are different ways of parsing and searching
free text appropriate for English, Spanish etc.
You have to say "text_en" instead of "text" as the field type. This
will do a good job for searching English la
And you will have to define a "text" field type, or use one of the existing
text field types, such as "text_general" in the example Solr schema.
-- Jack Krupansky
-Original Message-
From: Spadez
Sent: Wednesday, May 09, 2012 9:59 AM
To: solr-user@lucene.apache.org
Subject: Newbie Trie
The query treatment is probably correct, because the default operator is
"AND": when "and" gets treated as a stop word and ignored, the default
operator is still AND; but when "or" is treated as a stop word and ignored,
the operator changes from "OR" to the default implicit "AND".
-- Jack Kru
I don't personally know the details, but I heard somebody at the conference
say that you could hit some solr admin stats URL to access some MBeans stat
that tells you whether there are pending documents that are not yet
committed.
I see a reference to "docsPending" mentioned here:
http://lucid
You can also add a copyField to your schema to copy from "*_s" (or whatever
schema fields you are storing your strings in) to the "text" field (which has
a type of "text_general" or something similar). It's best to do separate
copyFields for only the specific string fields that have text you want to sea
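A sketch of what such copyFields might look like (the source field names here are assumptions):

```xml
<!-- copy only the string fields whose text should be searchable -->
<copyField source="company_s" dest="text"/>
<copyField source="description_s" dest="text"/>
```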
I am in an E-Commerce project right now, and I have a requirement like this:
I have a lot of commodities in my SOLR indexes, and each commodity has a price
field. Now I want to do a facet range query.
I referred to the solr wiki; the facet range query needs to specify
*facet.range.gap* or *facet.range.spec
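For illustration, a range facet over the price field might be requested with parameters like these (the start/end/gap values are assumptions):

```
facet=true
facet.range=price
facet.range.start=0
facet.range.end=1000
facet.range.gap=100
```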
some sample codes:
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

QueryParser parser = new QueryParser(Version.LUCENE_36, "title",
        new KeywordAnalyzer());
String q = "+title:hello\\ world";
Query query = parser.parse(q);
System.out.println(query);
Thanks, Jan.
For now, I would go for the quick solution and just have something that
removes and|or before sending the query to Solr.
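That quick solution can be sketched as a small pre-processing step (the class and method names here are hypothetical, not part of Solr):

```java
import java.util.regex.Pattern;

// Hypothetical helper: strip bare "and"/"or" (any case) from a user query
// before it is sent to Solr, so eDismax never sees them as operators.
public class OperatorStripper {
    private static final Pattern OPERATORS =
            Pattern.compile("(?i)\\b(and|or)\\b");

    public static String strip(String q) {
        // Word boundaries keep words like "order" or "android" intact;
        // collapse the leftover whitespace afterwards.
        return OPERATORS.matcher(q).replaceAll(" ")
                .replaceAll("\\s+", " ")
                .trim();
    }

    public static void main(String[] args) {
        System.out.println(strip("fish and chips OR ships")); // fish chips ships
    }
}
```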
The issue in https://issues.apache.org/jira/browse/SOLR-3086 SOLR-3086 is
not what I need. I don't want to totally disable boolean operators, just
limit them to
"+" before the term is correct; in the Lucene grammar, a term includes field and value:
Query ::= ( Clause )*
Clause ::= ["+", "-"] [<TERM> ":"] ( <TERM> | "(" Query ")" )
<#_TERM_CHAR: ( <_TERM_START_CHAR> | <_ESCAPED_CHAR> | "-" | "+" ) >
<#_ESCAPED_CHAR: "\\" ~[] >
in lucene query syntax, you can't express a term value i
Thank you for the feedback. Yes, they are used for geospatial search. After doing a
bit of homework I found this correction. Is this how it should be done?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Newbie-tries-to-make-a-Schema-xml-tp3974200p3975458.html
Sent from the S
Hi Otis,
I was not so much trying to find estimates as trying to indicate whether it was
done.
I understand the indexing works in batches, after which there's a commit
followed by a warm phase: if my add could be answered with a "commit id", and
one could then check whether that commit is now available
I have imported data from a database. When I set a type different from string,
Solr throws an error: Unknown fieldtype 'text' specified on field biog at
org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:511)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-query-i
There is a lat/long type and geosearch queries for it. Did you plan to
use that? See the solr/example schemas for use of geosearch.
On Wed, May 9, 2012 at 6:59 AM, Spadez wrote:
> Hi,
>
> I’m totally out of my depth here but I am trying, so I apologise if this is
> a bit of a basic question. I ne
Can you give me any example on how to do this?
I am really stuck
Thank you in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-query-issues-tp3974922p3975384.html
Sent from the Solr - User mailing list archive at Nabble.com.
Another option is to remove autowarming, and instead create a small
bunch of queries that go most of the way. If you sort on a field, do
that sort; facet queries also. This will load the basic Lucene data
structures. Also, just getting the index data loaded into the OS disk
cache helps a lot.
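A sketch of such static warming queries as a newSearcher listener in solrconfig.xml (the sort and facet field names are assumptions):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- load the sort field's Lucene data structures -->
    <lst><str name="q">*:*</str><str name="sort">price asc</str></lst>
    <!-- load the facet caches -->
    <lst><str name="q">*:*</str><str name="facet">true</str><str name="facet.field">category</str></lst>
  </arr>
</listener>
```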
On W
Hi Chris,
I think there is some confusion here.
When people say things about relevance scores they talk about comparing them
across queries.
What you have is a different situation, or at least a situation that lends
itself to working around this, at least partially.
You have N users.
Each user
On 5/9/2012 7:01 AM, richard.pog...@holidaylettings.co.uk wrote:
We are testing an updated version of our Solr server running solr 3.5.0 and we
are experiencing some performance issues with regard to updates and commits.
Searches are working well.
There are approximately 80,000 documents and t
Paul,
Are you asking how to figure out the time between "add doc" and "see doc"?
I suppose it could be useful to have Solr expose info about "how much time
until the next autocommit" and then you could add that to the warmup time from
previous warming and estimate.
Otis
Performance Monito
Hi Richard,
> An attempt to add a single document and commit is taking many minutes but the
> time taken is not consistent.
Are you committing after every doc? If so, don't do it. :) Check Solr ML
archives (e.g. http://search-lucene.com/ ) for past discussions on this topic.
Do you have any
Well, that's not gonna work, because your search field is just a string, so
only a literal search against it would work. You should instead define the
search field type to have analyzers and/or tokenizers, if of course it
contains some form of searchable text. Have a look at text_en, f
Yes, I have already done it!
The schema.xml file is:
Yes.
See http://wiki.apache.org/solr/SolrQuerySyntax - The standard Solr Query
Parser syntax is a superset of the Lucene Query Parser syntax.
Which links to http://lucene.apache.org/core/3_6_0/queryparsersyntax.html
Note - Based on the info on these pages I believe the "+" symbol is to be
p
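For example (the field names here are made up; note the quotes when the value contains a space):

```
q=+myField:"my value"          required clause, quoted phrase value
q=+title:solr +body:search     both clauses mandatory
```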
Hi,
Have you defined your default search field in the schema.xml? If not or in
doubt, just prefix your query specifically with a field name, smth like
q=search_field_name:word
-- Dmitry
On Wed, May 9, 2012 at 9:12 PM, anarchos78 wrote:
> Hello,
> I have successfully installed “Solr 3.6” over “T
Hello,
I have successfully installed “Solr 3.6” over “Tomcat” (inside a folder
under C:\ I have 2 subfolders: tomcat-“tomcat” installation and solr-“solr”
home). I have copied the solr folder from the “examples”. Then I tried to
index data from database. The indexing was successful. But I have a se
You will have to create the directory and configs yourself. You will need
to call the command once you create the directory and give permissions; the
following URL only creates the data folder and makes an entry in solr.xml.
Refer: http://blog.dustinrue.com/archives/690
Regards
Sujahta
On Wed, Ma
Hi :)
I remember that in a Lucene query, there is something like mandatory
values. I just have to add a "+" symbol in front of the mandatory
parameter, like: +myField:my value
I was wondering if there was something similar in Solr queries? Or is
this behaviour activated by default?
Gary
my setup includes asynchronous replication.
this means both are master AND slave at the same time, so I can easily switch
master and slave on the fly, without restarting any server, using a mass of
scripts ... I trigger a replication via cronjob and check every time whether the
server is master or slave. only slave
> so can't I ask here?
You _can_ post it here, but it is not just off-topic, it is not-topic.
If you post, don't expect happy responses and don't be surprised with
unhappy meta responses.
You _shouldn't_ post here concerning nutch.
Sorry, domain-specific lists are narrow for a reason.
You should
+1 as well especially for larger indexes
Sent from my Mobile device
720-256-8076
On May 9, 2012, at 9:46 AM, Jan Høydahl wrote:
>> I think we have to add this for java based rep.
> +1
>
Dear list,
I recently figured out that the FrenchLightStemFilterFactory performs
some interestingly undocumented normalization on tokens...
There's a norm() helper called for each produced token that performs,
amongst other things, deletions on repeated characters... Only for
tokens with mor
Why would you replicate data import properties? The master does the importing,
not the slave...
Sent from my Mobile device
720-256-8076
On May 9, 2012, at 7:23 AM, stockii wrote:
> Hello.
>
>
> i running a solr replication. works well, but i need to replicate my
> dataimport-properties.
>
>
Hi,
I’m totally out of my depth here but I am trying, so I apologise if this is
a bit of a basic question. I need the following information to be indexed
and then made searchable by Solr:
Title – A title for the company
Company – The name of the company
Description – A description of the company
> I think we have to add this for java based rep.
+1
Hi,
eDismax does its own query parsing before shipping the terms to Analysis (which
is responsible for stopword removal). That's why these are not treated as
stopwords. The quickest solution for you is probably to remove (or|OR|and|AND)
before sending the query to Solr.
Also see SOLR-3086 for
Afternoon,
We are testing an updated version of our Solr server running solr 3.5.0 and we
are experiencing some performance issues with regard to updates and commits.
Searches are working well.
There are approximately 80,000 documents and the index is about 2.5 GB. This
does not seem to be ext
For such an alerting service, I would make it a requirement that it's WYSIWYG -
e.g. let the user enter a search, and then refine it through facets, filters,
ranges etc. until he is satisfied with ALL the results returned. Do not rely on
relevance here, but sort the results by date or similar. You
What command are you using in your cron on the slave to only rebuild the
spellcheck index?
I have only found the option to query the slave with a dummy string and attach
"&spellcheck.build=true" as a URL parameter.
E.g.
slave-solr:8983/solr/my-index/spell/?q=helllo&spellcheck.build=true&wt=xml
On Wed, May 9, 2012 at 3:26 PM, pravesh wrote:
>>While n being a higher value, firing 100 cores wouldn't be a viable
>>solution. How do I achieve this in solr, in short I would like to
>>have a single core and get results out of multiple index searchers and
>>that implies multiple index readers.
Hello.
I am running Solr replication. It works well, but I need to replicate my
dataimport properties.
When server1 replicates this file, it creates a new file every time, with a
*.timestamp suffix, because the first replication run creates this file with
wrong permissions ...
How can I tell the Solr replica
>> My question is, is it possible to run
>> multiple combination of search queries to just get only result count "in
a
>> single trip" without using "facet.query"?
>>
>
> No. AFAIK.
Yes, you're right. I just tried googling this and I'm finding that a
requirement similar to mine is being filed u
Hello SOLR experts,
I have my own indexing web application which talks in XML to SOLR. It works
wonderfully well.
The queue is displayed in the indexer, so that experts can track that it went
well into the index.
However, I see no way currently to display that solr's searcher includes t
>While n being a higher value, firing 100 cores wouldn't be a viable
>solution. How do I achieve this in solr, in short I would like to
>have a single core and get results out of multiple index searchers and
>that implies multiple index readers.
When you'd want to have single core with multiple
Hi,
I have a requirement to create a core for each customer. I tried
creating cores using the code below:

    CoreAdminRequest.Create create = new CoreAdminRequest.Create();
    CoreAdminRequest.createCore(indexName + i, "C://solr/", solr);

It c
Hi
I tried to create cores dynamically using the code below:

    CoreAdminResponse statusResponse = CoreAdminRequest.getStatus(indexName, solr);
    coreExists = statusResponse.getCoreStatus(indexName).size() > 0;
    System.out.println("got the cor
Thanks Lance
There is already a clear partition - as you assumed, by date.
My requirement is for the best setup for:
1. A *single machine*
2. Quickly changing index - so I need to have the option to load and unload
partitions dynamically
Do you think that the sharding model that solr offers is t
Otis,
I've just subscribed to nutch mailing list, however it's a very
low-volume one (at least that's what I came across), so can't I ask here?
Regards,
On 5/8/12 11:54 PM, Otis Gospodnetic wrote:
Tolga - you should ask on the Nutch mailing list, not Solr one. :)
Otis
Performance Mon
I'm trying to think through a Solr-based email alerting engine that
would have the following properties:
1. Users can enter queries they want to be alerted on, and the syntax
for alert queries should be the same syntax as my regular solr
(dismax) queries.
1a. Corollary: Because of not just tf-idf
Hi Dave,
I tried to create a core programmatically as below, but I am getting the
following error:

    CoreAdminResponse statusResponse = CoreAdminRequest.getStatus(indexName, solr);
    coreExists = statusResponse.getCoreStatus(indexName).size() > 0;
Am 08.05.2012 23:23, schrieb Lance Norskog:
Lucene does not support more than 2^32 unique documents, so you need to
partition.
Just a small note:
I doubt that Solr supports more than 2^31 unique documents, as most
other Java applications that use int values.
Greetings,
Kuli
This page gives you everything you need:
http://wiki.apache.org/solr/CoreAdmin#CREATE
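For illustration, the CREATE call from that page looks like this (the host, core name and paths are placeholders):

```
http://localhost:8983/solr/admin/cores?action=CREATE&name=customer1&instanceDir=customer1&config=solrconfig.xml&schema=schema.xml
```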
Regards,
Dave
On 9 May 2012, at 08:32, pprabhcisco123 wrote:
> Hi,
>
>
> I am trying to create core dynamically. what are the configuration
> steps that needs to be followed to do the same. Please l
Hi,
I am trying to create cores dynamically. What are the configuration
steps that need to be followed to do the same? Please let me know if you
have any idea on that.
Thanks
Prabhakarn.P
--
View this message in context:
http://lucene.472066.n3.nabble.com/Configuration-steps-to-create
Dear all,
I am using solr for log search.
During every search, based on the input request, I will have to search
through n index directories dynamically. n may range from 1 to 100.
With n being a higher value, firing 100 cores wouldn't be a viable
solution. How do I achieve this in solr, in sh