My use case is that I want to search for any substring of the indexed string
and have the Suggester suggest the whole indexed string. What can I do to make
this work?
Thanks,
Prathik
On Thu, Jun 6, 2013 at 2:05 AM, Mikhail Khludnev wrote:
> Please excuse my misunderstanding, but I always wonder why thi
Hi,
Is it possible to create a query similar in function to multiple
SQL group by clauses?
I have documents that have single-valued fields for host name
and collection name and would like to group the results by both, e.g. a result
would contain a count of the docu
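A sketch of one way to get SQL-GROUP-BY-like counts over two single-valued fields, assuming hypothetical field names `hostname` and `collectionname` (pivot faceting, available since Solr 4.0):

```
select?q=*:*&rows=0&facet=true&facet.pivot=hostname,collectionname
```

Each pivot bucket then carries a count per hostname/collectionname combination. Note that at the time of this thread, pivot facets were not yet supported in distributed (sharded) queries.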
On 6/5/2013 11:25 PM, TwoFirst TwoLast wrote:
> 1) If I change one field's type in my schema, will that cause problems with
> the index or searching? My data is pulled in chunks off of a mysql server
> so one field in the currently indexed data is simply an "int" type field in
> solr. I would lik
1) If I change one field's type in my schema, will that cause problems with
the index or searching? My data is pulled in chunks off of a mysql server
so one field in the currently indexed data is simply an "int" type field in
solr. I would like to change this to a string moving forward, but still
Patrick-
I found the problem with multiple documents. The problem was that the
API for the life cycle of a Tokenizer changed, and I only noticed part
of the change. You can now upload multiple documents in one post, and
the OpenNLPTokenizer will process each document.
You're right, the exampl
On Wed, Jun 5, 2013 at 6:11 PM, Chris Hostetter
wrote:
> and think that conceptually it
> doesn't make sense for facet.missing to consider facet.mincount.
+1
"facet.missing" asks for the missing count - regardless of what it is.
Although it might make sense in some use cases to make facet.missin
That's what SolrCloud was invented for - fully-distributed indexing where
any update can be sent to any node.
With non-SolrCloud distributed Solr, YOU, the developer, are responsible for
figuring out which node a document should be sent to, both for updates and
for the original insertion. The po
I have 5 shards that have different data indexed in them (each document has a
unique id).
Now when I perform dynamic updates (push indexing) I need to update the
document corresponding to the unique id that needs to be updated, but I
won't know which core the corresponding document is in.
Hi,
I've successfully searched over several separate collections (cores
with unique schemas) using this kind of syntax. This demonstrates a
two-core search:
http://localhost:8983/solr/collection1/select?
q=my phrase to search on&
start=0&
rows=25&
fl=*,score&
fq={!join+fromIndex=collection2+from=
I am not sure of the best way to search across multiple collections using Solr
4.3.
Suppose each collection has its own config files and I perform various
operations on collections individually, but when I search I want the search
to happen across all collections. Can someone let me know how to pe
Might not be a solution, but I had asked a similar question before. Check out
this thread:
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-load-multiple-schema-when-using-zookeeper-td4058358.html
You can create multiple collections and each collection can use completely
different sets of conf
I tested using the new geospatial class, works fine with new spatial type
using class="solr.SpatialRecursivePrefixTreeFieldType"
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
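For reference, a field type declaration along the lines of the wiki page linked above (attribute values follow the wiki's example and are not requirements):

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees"/>
<field name="geo" type="location_rpt" indexed="true" stored="true" multiValued="true"/>
```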
You can dynamically set the boolean value by using a script transformer when
indexing the data. You don't really
I used to face this issue more often when I used CachedSqlEntityProcessor in
DIH.
I then started indexing in batches (by including where condition) instead of
indexing everything at once..
You can refer to other available options for mysql driver
http://dev.mysql.com/doc/refman/5.0/en/connector
Thanks for the replies.
I found that -location_field:* returns documents that both have and don't
have the field set.
I should clarify that I am using Solr 3.4
the location type is set to solr.LatLonType
Although I could add a boolean field that is true if location is set I'd
rather not have redu
: Furthermore, I have realized that the issue is with MySQL as its not
: processing this table when a "where" is applied
http://wiki.apache.org/solr/DataImportHandlerFaq#I.27m_using_DataImportHandler_with_a_MySQL_database._My_table_is_huge_and_DataImportHandler_is_going_out_of_memory._Why_doe
: I updated the Index using SolrJ and got the exact same error message
there aren't a lot of specifics provided in this thread, so this may not
be applicable, but if you mean you are actually using the "atomic updates"
feature to update an existing document, then the problem is that you still
have
Either have your update client explicitly set a boolean field that indicates
whether location is present, or use an update processor to set an explicit
boolean field that means no location present:
<processor class="solr.CloneFieldUpdateProcessorFactory">
  <str name="source">location_field</str>
  <str name="dest">has_location_b</str>
</processor>
<processor class="solr.RegexReplaceProcessorFactory">
  <str name="fieldName">has_location_b</str>
  <str name="pattern">[^\s]+</str>
  <str name="replacement">true</str>
</processor>
<processor class="solr.DefaultValueUpdateProcessorFactory">
  <str name="fieldName">has_locatio
select?q=* -location_field:* worked for me
--
View this message in context:
http://lucene.472066.n3.nabble.com/search-for-docs-where-location-not-present-tp4068444p4068452.html
Sent from the Solr - User mailing list archive at Nabble.com.
: filter again with the same facet. Also, when a facet has only one value, it
: doesn't make sense to show it to the user, since searching with that facet
: is just going to give the same result set again. So when facet.missing does
: not work with facet.mincount, it is a bit of a hassle for us...
On 6/5/2013 2:11 PM, ethereal wrote:
Hi,
I've tested a query using solr admin web interface and it works fine.
But when I'm trying to execute the same search using solrj, it doesn't
include Stats information.
I've figured out that it's because my query is encoded.
Original query is like q=eventT
A Solr index does not need a unique key, but almost all indexes use one.
http://wiki.apache.org/solr/UniqueKey
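Declaring one is a two-line affair in schema.xml (a sketch, assuming a string id field):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```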
Try the below query passing id as id instead of titleid..
A proper dataimport config will look like,
I have a location-type field in my schema where I store lat / lon of a
document when this data is available. In around half of my documents this
info is not available and I just don't store anything.
I am trying to find the documents where the location is not set but nothing
is working.
I tried
Your problem statement is fairly odd. You say
you've defined "object" as a stopword, but then
you want your query to return documents that
contain "object". By definition stopwords are
something that is considered irrelevant for searching
and are ignored.
So why not just take "object" out of your
My usual admonishment is that Solr isn't a database, and when
you try to use it like one you're just _asking_ for problems. That
said
Consider two options:
1> use a different core for each table.
2> in schema.xml, remove the id field (required="true" _might_ be specified)
: I've tested a query using solr admin web interface and it works fine.
: But when I'm trying to execute the same search using solrj, it doesn't
: include Stats information.
: I've figured out that it's because my query is encoded.
I don't think you are understanding how to use SolrJ and the SolrQu
Note that stored=true/false is irrelevant to the raw search time.
What it _is_ relevant to is the time it takes to assemble the doc
for return, if (and only if) you return that field. I claim your search
time would be fast if you went ahead and stored the field,
and specified an fl clause that did
To add some numbers to adityab's comment.
Each entry in your filter cache will probably consist
of maxDocs/8 bytes plus some overhead. Or about 16G.
This will only grow as you fire queries at Solr, so
it's no surprise you're running out of memory as you
process queries.
Your documentCache is prob
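The back-of-the-envelope math above can be written out as follows (a sketch; the exact per-entry overhead varies):

```python
def filtercache_bytes(max_docs: int, entries: int) -> int:
    """Rough filterCache footprint: each cached filter is a bitset with
    one bit per document in the index, ignoring per-entry overhead."""
    return (max_docs // 8) * entries

# hypothetical numbers: a 128M-doc index with 1024 cached filters is ~16 GB
print(filtercache_bytes(128_000_000, 1024))  # 16384000000
```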
Sounds like the Solr Admin UI is too-aggressively encoding the query part of
the URL for display. Each query parameter value needs to be encoded, not the
entire URL query string as a whole.
-- Jack Krupansky
-Original Message-
From: ethereal
Sent: Wednesday, June 05, 2013 4:11 PM
To:
Please excuse my misunderstanding, but I always wonder why this index-time
processing is usually suggested. From my POV this is a case for query-time
processing, i.e. PrefixQuery aka wildcard query Jason* .
Ultra-fast term retrieval also provided by TermsComponent.
On Wed, Jun 5, 2013 at 8:09 PM, Jac
Hi,
I've tested a query using solr admin web interface and it works fine.
But when I'm trying to execute the same search using solrj, it doesn't
include Stats information.
I've figured out that it's because my query is encoded.
Original query is like q=eventTimestamp:[2013-06-01T12:00:00.000Z TO
2
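The double-encoding trap described above can be illustrated generically (Python's urllib stands in here for whatever HTTP layer is involved; the point is that the query must be percent-encoded exactly once):

```python
from urllib.parse import quote, unquote

raw = "eventTimestamp:[2013-06-01T12:00:00.000Z TO *]"

encoded_once = quote(raw)            # what should go on the wire
encoded_twice = quote(encoded_once)  # what a pre-encoded query turns into

# decoding once recovers the original only if it was encoded exactly once
print(unquote(encoded_once) == raw)   # True
print(unquote(encoded_twice) == raw)  # False - the server sees a mangled query
```

With SolrJ, this means passing the raw, unencoded query string to the client and letting it do the encoding.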
On 6/5/2013 10:05 AM, Mark Miller wrote:
Sounds like a bug - we probably don't have a test that updates a link - if you
can make a JIRA issue, I'll be happy to look into it soon.
I will go ahead and create an issue so that a test can be built, but I
have some more info: It works perfectly whe
Check out this
http://stackoverflow.com/questions/5549880/using-solr-for-indexing-multiple-languages
http://wiki.apache.org/solr/LanguageAnalysis#French
French stop words file (sample):
http://trac.foswiki.org/browser/trunk/SolrPlugin/solr/multicore/conf/stopwords-fr.txt
Solr includes three stem
Hi,
I am using the standard edismax parser and my example query is as follows:
{!edismax qf='object_description ' rows=10 start=0 mm=-40% v='object'}
In this case, 'object' happens to be a stopword in the StopWordsFilter in my
datatype 'object_description'. Now, since 'object' is not indexe
Thanks for the hints.
I am not sure how to solve this issue. I previously made a typo; there
are definitely two different tables.
Here is my real configuration:
http://pastebin.com/JUDzaMk0
For testing purposes I added "LIMIT 10" to the SQL statements because my
tables are very large and tests
Good call Jack. I totally missed that. I am curious how dataimport handler
worked before – if I made a mistake in the specification and it did not get
the jar. Anyway, it works now. Thanks again.
O.O.
"apache-solr-dataimporthandler-.*\.jar" - note that the "apache-" prefix has
been removed from
Look in the Solr log - the error message should tell you what the multiple
values are. For example,
95484 [qtp2998209-11] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: ERROR: [doc=doc-1] multiple values
encountered for non multiValued field content_s: [def, abc]
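If the document genuinely carries several values for that field, the fix is to declare it multiValued in schema.xml (a sketch; `content_s` is taken from the error message above):

```xml
<field name="content_s" type="string" indexed="true" stored="true" multiValued="true"/>
```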
On 6 June 2013 00:09, Stavros Delisavas wrote:
>
> Thanks so far.
>
> This change makes Solr work over the title-entries too, yay! Unfortunately
> they don't get processed (skipped rows). In my log it says
> "missing required field id" for every entry.
>
> I checked my schema.xml. In there "id" is n
On Jun 5, 2013, at 20:39 , Stavros Delisavas wrote:
> Thanks so far.
>
> This change makes Solr work over the title-entries too, yay! Unfortunately
> they don't get processed (skipped rows). In my log it says
> "missing required field id" for every entry.
>
> I checked my schema.xml. In there "id
Thanks so far.
This change makes Solr work over the title-entries too, yay!
Unfortunately they don't get processed (skipped rows). In my log it says
"missing required field id" for every entry.
I checked my schema.xml. In there "id" is not set as a required field.
removing the uniquekey-propert
This may be more suitable on the dev-list, but distributed pivot facets is
a very powerful feature. The Jira issue for this is SOLR-2894 (
https://issues.apache.org/jira/browse/SOLR-2894). I have done some testing
of the last patch for this issue, and it is as Andrew says: Everything but
datetime f
On Wed, Jun 5, 2013 at 9:04 PM, Eustache Felenc
wrote:
> There is also http://wiki.apache.org/solr/SolrRelevancyCookbook with nice
> examples.
>
Thank you.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
That was a very silly mistake. I forgot to add the values to the array before
putting them inside the row; the below code works. Thanks a lot!
There is also http://wiki.apache.org/solr/SolrRelevancyCookbook with
nice examples.
On 06/05/2013 12:13 PM, Jack Krupansky wrote:
"Is there any other documentation that I should review?"
It's in the works! Within a week or two.
-- Jack Krupansky
-Original Message- From: Dotan Cohen
S
Are you using IE? If so, you might want to try using Firefox.
-Original Message-
From: sathish_ix [mailto:skandhasw...@inautix.co.in]
Sent: Wednesday, June 05, 2013 6:16 AM
To: solr-user@lucene.apache.org
Subject: Sole instance state is down in cloud mode
Hi,
When I start a core in sol
Okay, I'm so sorry. I will not create the same task in a separate topic next time.
So here it is for a record how I am solving it right now:
Write-master is started with: -Dmontysolr.warming.enabled=false
-Dmontysolr.write.master=true -Dmontysolr.read.master=http://localhost:5005
Read-master is started with: -Dmontysolr.warming.enabled=true
-Dmontysolr.write.master=false
solrc
Thanks a lot for your response, Hoss. I thought about using a ScriptTransformer
too, but just thought of checking if there is any other way to do that.
Btw, for some reason the values are getting overridden even though it's a
multivalued field. Not sure where I am going wrong!
for latlong values
OK, I have two fields defined as follows:
and this copyField directive
I updated the Index using SolrJ and got the exact same error message
that is in the subject. However, while waiting for feedback I built a
workaround at the application level and now reconstructing the
original state
Hoss,
We rely heavily on facet.mincount because once a user has selected a facet,
it doesn't make sense for us to show that facet field to him and let him
filter again with the same facet. Also, when a facet has only one value, it
doesn't make sense to show it to the user, since searching with that
I have not implemented it yet, and I forget the exact webpage I found. But
there was a person on that page discussing the same problem who said it was
easy to implement a solution for it, but he did not share his solution. If
you figure it out, let me know.
> select?defType=edismax&q={!q.op=OR}search_field:term1 term2&pf=search_field
>
Is there any way to perform a fuzzy search with this method? I have
tried appending "~1" to every term in the search like so:
select?defType=edismax&q={!q.op=OR}search_field:term1~1%20term2~1&pf=search_field
However,
Please don't create new threads re-asking the same questions -- especially
when the existing thread is only a day old, and still actively getting
responses.
It just increases the overall noise of the list, and results in
multiple people wasting their time providing you with the same answers
Hi,
We have a setup where we have 3 shards in a collection, and each shard in
the collection needs to load a different set of data.
That is:
Shard1 - will contain data only for Entity1
Shard2 - will contain data for Entity2
Shard3 - will contain data for Entity3
So in this case, the db-data-config.xml c
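One way to sketch this (entity names, columns, and connection details are hypothetical) is to give each shard its own db-data-config.xml whose query selects only that shard's entity:

```xml
<!-- db-data-config.xml deployed on Shard1 only: loads nothing but Entity1 -->
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://dbhost/db"
              user="user" password="pass"/>
  <document>
    <entity name="entity1" query="SELECT id, name FROM entity1">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

Shard2 and Shard3 would carry the same file with their own entity query.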
: How can I don't overwrite other entities?
: Please assist me on this example.
I'm confused, you sent this in direct reply to my last message, which
contained the following...
1) a paragraph describing the general approach to solving this type of
problem...
>> You can use TemplateTransformer
Ngrams won't work here. If I index all the ngrams of the string, then when I
try to search for some string it will suggest all the ngrams as well.
Eg:
Dictionary contains the word "Jason Bourne" and you index all the ngrams of
the above word.
When I try to search for "Jason" solr suggests all the ng
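The effect described above is easy to reproduce (a sketch of what an index-time NGram filter emits; the gram sizes are hypothetical):

```python
def char_ngrams(text: str, n_min: int = 2, n_max: int = 5) -> set[str]:
    # every character n-gram of the input, as an NGram filter would emit at index time
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            grams.add(text[i:i + n])
    return grams

grams = char_ngrams("jason bourne")
# a substring query like "ason" matches one of the indexed grams exactly...
print("ason" in grams)  # True
# ...but every other gram is a term in the dictionary too, so all of them
# show up as suggestions unless the suggester returns the original string
```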
Try creating a composite key that includes the schema name as part of the
key. Otherwise, what do you actually expect to happen if all tables
have ID=1? What (single) entry do you expect to end up in Solr?
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.co
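The composite-key idea can be sketched as a transformation applied before indexing (the helper name is made up):

```python
def composite_id(table: str, row_id) -> str:
    # prefix the table name so id=1 from different tables can no longer collide
    return f"{table}-{row_id}"

print(composite_id("admin", 1))    # admin-1
print(composite_id("account", 1))  # account-1
```

In DIH the same thing is usually done with a TemplateTransformer, e.g. `<field column="id" template="admin-${admin.id}"/>`.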
Hehe.
Yes, all my tables' ID field names are different.
For example:
I have 5 tables. These names are 'admin, account, group, checklist'
admin => id -> uniquekey
account => account_id -> uniquekey
group => group_id -> uniquekey
checklist => id -> uniquekey
Also I thought the last entity overwrites the other entities.
Maybe the problem is the two document declarations in data-config.xml.
I will try changing this one.
Yes. My ID field is the uniqueKey. How can I keep the entities from overriding each other?
"Is there any other documentation that I should review?"
It's in the works! Within a week or two.
-- Jack Krupansky
-Original Message-
From: Dotan Cohen
Sent: Wednesday, June 05, 2013 12:06 PM
To: solr-user@lucene.apache.org
Subject: Re: Phrase matching with set union as opposed to se
ngrams?
See:
http://lucene.apache.org/core/4_3_0/analyzers-common/org/apache/lucene/analysis/ngram/NGramFilterFactory.html
-- Jack Krupansky
-Original Message-
From: Prathik Puthran
Sent: Wednesday, June 05, 2013 11:59 AM
To: solr-user@lucene.apache.org
Subject: Configuring lucene to
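A field type along these lines (gram sizes are arbitrary; tune to taste) indexes every substring while leaving the query side untouched, so a query for "Jason" matches the stored "Jason Bourne" as a whole:

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```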
So we see the jagged-edge waveform which keeps climbing (GC cycles don't
completely collect memory over time). Our test has a short capture from
real traffic and we are replaying that via solrmeter.
Any idea why the memory climbs over time? The GC should clean up after data
is shipped back. Co
"apache-solr-dataimporthandler-.*\.jar" - note that the "apache-" prefix has
been removed from Solr jar files.
-- Jack Krupansky
-Original Message-
From: O. Olson
Sent: Wednesday, June 05, 2013 12:01 PM
To: solr-user@lucene.apache.org
Subject: No files added to classloader from lib
H
On Wed, Jun 5, 2013 at 6:23 PM, Jack Krupansky wrote:
> term1 OR term2 OR "term1 term2"^2
>
> term1 OR term2 OR "term1 term2"~10^2
>
> The latter would rank documents with the terms nearby higher, and the
> adjacent terms highest.
>
> term1 OR term2 OR "term1 term2"~10^2 OR "term1 term2"^20 OR "te
Hi Peter,
Thank you, I am glad to read that this use case is not alien.
I'd like to make the second instance (searcher) completely read-only, so I
have disabled all the components that can write.
(being lazy ;)) I'll probably use
http://wiki.apache.org/solr/CollectionDistribution to call the curl
On Wed, Jun 5, 2013 at 6:10 PM, Shawn Heisey wrote:
> On 6/5/2013 9:03 AM, Dotan Cohen wrote:
>> How would one write a query which should perform set union on the
>> search terms (term1 OR term2 OR term3), and yet also perform phrase
>> matching if both terms are found? I tried a few variants of t
Sounds like a bug - we probably don't have a test that updates a link - if you
can make a JIRA issue, I'll be happy to look into it soon.
- Mark
On Jun 4, 2013, at 8:16 AM, Shawn Heisey wrote:
> I've got Solr 4.2.1 running SolrCloud. I need to change the config set
> associated with a collect
Hi,
I downloaded Solr 4.3 and I am attempting to run and configure a
separate
Solr instance under Jetty. I copied the Solr "dist" directory contents to a
directory called "solrDist" under the single core "db" that I was running. I
then attempted to get the DataImportHandler using the foll
Hi,
Is it possible to configure solr to suggest the indexed string for all the
searches of the substring of the string?
Thanks,
Prathik
Shawn:
You're right, I thought I'd seen it as an option but I think I
was confusing really old Solr.
Thanks for catching that; having gotten it wrong once, I'm sure I'll
remember it better next time!
Erick
On Tue, Jun 4, 2013 at 1:57 PM, SandeepM wrote:
> Thanks Eric and Shawn,
>
> Your explanat
Guys,
I am going to use Solr 4.3 in my shopping cart project.
I need to support my website in two languages (English and French), so I
would like some guidance on implementing internationalization with
Solr 4.3.
Please guide me with some sample configuration to support the French language
with So
term1 OR term2 OR "term1 term2"^2
term1 OR term2 OR "term1 term2"~10^2
The latter would rank documents with the terms nearby higher, and the
adjacent terms highest.
term1 OR term2 OR "term1 term2"~10^2 OR "term1 term2"^20 OR "term2 term1"^20
To further boost adjacent terms.
But the edismax
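For completeness, edismax can produce this behavior without hand-writing the OR clauses: qf matches the individual terms, and pf adds an implicit phrase boost (field name and boost values here are illustrative):

```
select?defType=edismax&q=term1 term2&qf=text&pf=text^20&ps=10
```

ps is the phrase slop applied to the pf phrase.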
Try describing your own symptom in your own words - because his issue
related to Solr 1.4. I mean, where exactly are you setting
"allowDuplicates=false"?? And why do you think it has anything to do with
adding documents to Solr? Solr 1.4 did not have atomic update, so sending
the exact same doc
Everything is working great now.
Thanks David
On Wed, Jun 5, 2013 at 12:07 AM, David Smiley (@MITRE.org) <
dsmi...@mitre.org> wrote:
> maxDistErr should be like 0.3 based on earlier parts of this discussion
> since
> your data is to one of a couple hours of the day, not whole days. If it
> was
On 6/5/2013 9:03 AM, Dotan Cohen wrote:
> How would one write a query which should perform set union on the
> search terms (term1 OR term2 OR term3), and yet also perform phrase
> matching if both terms are found? I tried a few variants of the
> following, but in every case I am getting set interse
How would one write a query which should perform set union on the
search terms (term1 OR term2 OR term3), and yet also perform phrase
matching if both terms are found? I tried a few variants of the
following, but in every case I am getting set intersection on the
search terms:
select?q={!q.op=OR}t
Sorry for opening a new thread. As I sent the first message without
subscribing to the mailing list, I couldn't find a way to reply to the
original thread. The message stream is attached below.
Actually the requirement came up from such a scenario: we collect some XML
documents from some external r
> We have a number of Jira issues that specifically deal with something
> called "Developer Curb Appeal." I think it's pretty clear that we need
> to tackle a bunch of things we could call "Newcomer Curb Appeal." I can
> work on filing some issues, some of which will address code, some of
> which
I think the suggestion I have seen is that a copyField should be
index-only and - therefore - will not be returned. It is primarily
there to make searching easier by aggregating fields or to provide an
alternative analyzer pipeline.
Can you make your copyField destination not stored?
Regards,
Alex.
Hello Solr-Friends,
I have a problem with my current Solr configuration. I want to import
two tables into Solr. I got it to work for the first table, but the
second table doesn't get imported (no error message, 0 rows skipped).
I have two tables called name and title and I want to load their fie
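For reference, a DIH config for two tables usually looks something like this (column and field names are guesses based on the table names mentioned):

```xml
<document>
  <!-- both tables as sibling entities under ONE document element -->
  <entity name="name" query="SELECT id, name FROM name">
    <field column="id" name="id"/>
    <field column="name" name="name"/>
  </entity>
  <entity name="title" query="SELECT id, title FROM title">
    <field column="id" name="id"/>
    <field column="title" name="title"/>
  </entity>
</document>
```

Note that if both tables share id values, the second entity's documents overwrite the first's unless the uniqueKey is made distinct (e.g. with a TemplateTransformer).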
I have the exact same problem as the guy here:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201105.mbox/%3C3A2B3E42FCAA4BF496AE625426C5C6E4@Wurstsemmel%3E
AFAICS he did not get an answer. Is this a known issue? What can I do
other than doing what copyField should do in my application
If we look at the UI of other cloud-based software like Couchbase or Riak,
they are more intuitive than Solr's UI. Of course the UI is brand new and
needs a lot of improvement. For example, the possibility of selecting an
existing config from ZooKeeper when you are using the wizard to create a
collection
Some values in the field are up to 1M as well.
On Wed, Jun 5, 2013 at 7:27 PM, Raheel Hasan wrote:
> ok thanks for the reply The field having values like 60kb each
>
> Furthermore, I have realized that the issue is with MySQL as its not
> processing this table when a "where" is applied.
OK, thanks for the reply. The field has values of around 60 KB each.
Furthermore, I have realized that the issue is with MySQL, as it's not
processing this table when a "where" is applied.
Secondly, I have turned this field to "stored=false" and now "select/"
is fast again
On Wed, Jun 5, 2013 at 1:48 AM, Aaron Greenspan
wrote:
> I say this not because I enjoy starting flame wars or because I have the time
> to participate in them--I don't. I realize that there's a long history to
> Solr and I am the new kid who doesn't get it. Nonetheless, that doesn't
> change t
On 6/5/2013 3:07 AM, Varsha Rani wrote:
> Hi ,
>
> I am having solr index of 80GB with 1 million documents .Each document of
> aprx. 500KB . I have a machine with 16GB ram.
>
> I am running mlt query on 3-5 fields of theses document .
>
> I am getting solr out of memory problem .
This wiki pag
On 6/5/2013 3:08 AM, Raheel Hasan wrote:
> Hi,
>
> I am trying to index a heavy dataset with 1 particular field really too
> heavy...
>
> However, As I start, I get Memory warning and rollback (OutOfMemoryError).
> So, I have learned that we can use -Xmx1024m option with java command to
> start t
On 6/5/2013 3:46 AM, Raheel Hasan wrote:
> OK thanks... it works... :D
>
> Also I found that we could put both of them and it will also work:
> log4j.rootLogger=INFO, file
> log4j.rootLogger=WARN, CONSOLE
If this completely separates INFO from WARN and ERROR, then you would
want to rethink and pr
On Wed, Jun 5, 2013 at 3:41 PM, Brendan Grainger
wrote:
> Hi Dotan,
>
> I think all you need to do is add:
>
> facet.mincount=1
>
> i.e.
>
> select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&
> rows=0&facet.mincount=1
>
> Note that you can do it per field as well:
>
> select?q=*:*&fq=tags:d
Did you try reducing the filter and query caches? They are fairly large too,
unless you really need them to be cached for your use case.
Do you have that many distinct filter queries hitting Solr for the size you
have defined for filterCache?
Are you doing any sorting? This will chew up a lot of memo
On Wed, Jun 5, 2013 at 3:38 PM, Raymond Wiker wrote:
> 3) Use the parameter facet.prefix, e.g., facet.prefix=dotan-. Note: this
> particular case will not work if the field you're faceting on is tokenised
> (with "-" being used as a token separator).
>
> 4) Use the parameter facet.mincount - looks
Hi,
I have a problem where the text corpus we need to search over contains many
misspelled words. The same word could also be misspelled in
several different ways. It could also have documents that have correct
spellings. However, the search term that we give in the query would always be
the correct spe
Hi Dotan,
I think all you need to do is add:
facet.mincount=1
i.e.
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&
rows=0&facet.mincount=1
Note that you can do it per field as well:
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&
rows=0&f.tags.facet.mincount=1
http://wiki
3) Use the parameter facet.prefix, e.g., facet.prefix=dotan-. Note: this
particular case will not work if the field you're faceting on is tokenised
(with "-" being used as a token separator).
4) Use the parameter facet.mincount - looks like you want to set it to 1,
instead of the default which is
Consider the following Solr query:
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&rows=0
The 'tags' field is a multivalue field. I would expect the previous
query to return only tags that begin with the string 'dotan-' such as:
dotan-home
dotan-work
...but not strings which do not begin
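Combining the facet.prefix and facet.mincount suggestions from the replies in this thread, the query becomes (a sketch):

```
select?q=*:*&rows=0&facet=true&facet.field=tags&facet.prefix=dotan-&facet.mincount=1
```

This restricts the facet listing to values starting with "dotan-" and drops zero-count values.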
1. SolrCell (ExtractingRequestHandler) - extract and index content from rich
documents, such as PDF, Office docs, HTML (uses Tika)
2. Clustering - for result clustering.
3. Language identification (two update processors) - analyzes text of fields
to determine language code.
None of those is ma
Hi yriveiro,
When I was using document cache size "131072", I got an exception after
5000-6000 mlt queries.
But once I set document cache size to "16384", I got the same problem after
1500-2000 mlt queries.
davers wrote
> I want to elevate certain documents differently depending on a certain fq
> parameter in the request. I've read of somebody coding Solr to do this but
> no code was shared. Where would I start looking to implement this feature
> myself?
Davers,
I am also looking into this feature.