Hi Daniel,
you can also consider using negative boosts. Solr can't apply a negative
boost value directly, but docs which don't match the metadata can be
boosted instead.
This might do what you want :
-metadata1:(term1 AND ... AND termN)^2
-metadata2:(term1 AND ... AND termN)^2
...
-metadataN:(term1 AND ... AND termN)^2
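As a small sketch of building those clauses programmatically (field and term names here are illustrative, not from Daniel's schema):

```python
# Sketch: generate one boosted negative clause per metadata field, in the
# shape shown above. Purely string assembly; no Solr client involved.
def negative_boost_clauses(fields, terms, boost=2):
    joined = " AND ".join(terms)
    return [f"-{f}:({joined})^{boost}" for f in fields]

clauses = negative_boost_clauses(["metadata1", "metadata2"], ["term1", "term2"])
# clauses[0] == "-metadata1:(term1 AND term2)^2"
```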
al
Hi
In Solr 4.0.0 I used to be able to run with persistent=false (in
solr.xml). I can see
(https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml)
that persistent is no longer supported in solr.xml. Does this mean that
you cannot run in non-persistent mode any longer, or does it m
Dear Solr-Experts,
I am using Solr for my current web-application on my server successfully.
Now I would like to use it in my second web-application that is hosted
on the same server. Is it possible in any way to create two independent
instances/databases in Solr? I know that I could create anothe
On 23 January 2014 14:06, Stavros Delisavas wrote:
> Dear Solr-Experts,
>
> I am using Solr for my current web-application on my server successfully.
> Now I would like to use it in my second web-application that is hosted
> on the same server. Is it possible in any way to create two independent
>
If you are not worried about them stepping on each other's toes
(performance, disk space, etc), just create multiple collections.
There are examples of that in the standard distribution (e.g. the badly
named example/multicore).
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: htt
Hi;
Firstly you should read here and learn the terminology of Solr:
http://wiki.apache.org/solr/SolrTerminology
Thanks;
Furkan KAMACI
2014/1/23 Alexandre Rafalovitch
> If you are not worried about them stepping on each other's toes
> (performance, disk space, etc), just create multiple collec
Thanks for the fast responses. Looks like exactly what I was looking for!
Am 23.01.2014 09:46, schrieb Furkan KAMACI:
> Hi;
>
> Firstly you should read here and learn the terminology of Solr:
> http://wiki.apache.org/solr/SolrTerminology
>
> Thanks;
> Furkan KAMACI
>
>
> 2014/1/23 Alexandre Raf
Which is why it is curious that you did not find it. Looking back at
it now, do you have a suggestion of what could be improved to ensure
people find this more easily in the future?
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovit
Hi;
I've written a Search API in front of my SolrCloud. When a user sends a
query, it goes to my Search API (which uses SolrJ). The query is validated,
fixed, and filled with some default parameters that a user cannot change;
after that the query goes to the SolrCloud.
It allows me to expose my index vi
I didn't know that the "core" term is associated with this use case. I
expected it to be some technical feature that allows running more
Solr instances for better multithreaded CPU usage, for example activating
two Solr cores when two CPU cores are available on the server.
So in general, I have the
In my case I need to know the number of unique visitors and the number of
visits in a period of time.
I need to render the data in a table with pagination. To know the number of
unique elements, to calculate the total number of pages, the only way I found
was to return facets=-1.
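The page-count arithmetic behind that can be sketched as follows (page size is illustrative; the facet call itself is not shown):

```python
import math

# Sketch: derive the number of pages for a paginated table from the count
# of distinct facet values returned by Solr.
def total_pages(unique_count, page_size):
    return math.ceil(unique_count / page_size)

pages = total_pages(101, 10)  # 101 unique visitors, 10 rows per page -> 11
```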
/yago
You are right on that one. "Collection" is the new term, which is why
the basic example is "collection1". "Core" is the physical representation,
and it gets a bit confusing at that level with shards and all that.
The documentation is in a transition.
Regards,
Alex.
Personal website: http://www.outerthou
On Thu, 2014-01-23 at 09:36 +0100, Stavros Delisavas wrote:
> I am using Solr for my current web-application on my server successfully.
> Also I would like to be able to have one state of my development version
> and one state of my production version on my server so that I can do
> tests on my dev
So far, I successfully managed to create a core from my existing
configuration by opening this URL in my browser:
http://localhost:8080/solr/admin/cores?action=CREATE&name=glPrototypeCore&instanceDir=/etc/solr
New status from http://localhost:8080/solr/admin/cores?action=STATUS is:
0
4
/us
Hi Elodie,
Thanks for pointing it out. I have created a Jira for this (
https://issues.apache.org/jira/browse/SOLR-5658 )
You could track the progress of it there.
On Wed, Dec 11, 2013 at 3:11 PM, Elodie Sannier wrote:
> Hello,
>
> I am using SolrCloud 4.6.0 with two shards, two replicas by s
Yeah, I can now also reproduce the problem with a build of the 20th! Again the
same nodes, leader and replica. The problem seems to be in the data we're
sending to Solr. I'll check it out and file an issue.
Cheers
-Original message-
> From:Mark Miller
> Sent: Wednesday 22nd January 2014 1
You need config-dir-level schema.xml and solrconfig.xml. For multiple
collections, you also need a top-level solr.xml. And unless the config
files have a lot of references to other files, you need nothing else.
For examples, check the example directory in the distribution. Or have
a look at examples f
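As a rough sketch (directory names illustrative, loosely following the example layout in the distribution), a minimal multi-collection layout might look like:

```
solr/
├── solr.xml              # top-level: locates the cores/collections
├── collection1/
│   └── conf/
│       ├── schema.xml
│       └── solrconfig.xml
└── collection2/
    └── conf/
        ├── schema.xml
        └── solrconfig.xml
```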
Thanks a lot,
those are great examples. I managed to get my cores working. What I
noticed so far is that the first (auto-created) core is symlinking files
to /etc/solr/... or to /var/lib/solr/...
I am now not sure where my self-made collections should be. Shall I
create folders in /usr/share/solr
I have a solr db containing 1 billion records that I'm trying to use in a
NoSQL fashion.
What I want to do is find the best matches using all search terms but
restrict the search space to the most unique terms.
In this example I know that val2 and val4 are rare terms and val1 and val3
are more comm
You are not doing this on a downloaded distribution, are you? You are
using a Bitnami stack or something. That's why you are not seeing the
examples folder, etc.
I recommend stepping back, using the downloaded distribution, and doing
your learning and setup with that. Then go and see where your production
stack pu
Hi,
I am new to Solr and successfully did a basic search. Now I am trying to do
classification of the search results using Carrot2's support, which comes
with Solr 4.5.1. Would appreciate it if someone could tell me what it is
that I am missing... maybe a trivial issue?
I am getting the below error..*jav
I installed Solr via apt-get and followed the online tutorials that I
found to adjust the existing schema.xml, and created dataconfig.xml the
way I needed it.
Was this the wrong approach? I don't know what a Bitnami stack is.
Am 23.01.2014 12:50, schrieb Alexandre Rafalovitch:
> You are not doi
Maybe you could move (field2:val2 OR field4:val4) into a filter? E.g.,
q=(field1:val1 OR field2:val2 OR field3:val3 OR
field4:val4)&fq=(field2:val2 OR field4:val4)
If I have this right, the fq part should be evaluated first, and may
even be found in the filter cache.
On Thu, Jan 23, 2014 at
Just download Solr stack from the download page and practice on that.
That has all the startup scripts and relative paths set up.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps e
I checked solrconfig.xml in Solr 4.3 and Solr 1.4.
In both I have checked:
*Solr 1.4::*
*Solr 4.3::*
So how do I handle the dismax query type (qt) in Solr 4.3?
In Solr 1.4.1 we used qt=dismax,
but in Solr 4.3 there is no such configuration,
so the two give different results.
--
Regards,
Viresh Modi
Ignore or throw proper error message for bad delete containing bad composite ID
https://issues.apache.org/jira/browse/SOLR-5659
-Original message-
> From:Markus Jelsma
> Sent: Thursday 23rd January 2014 12:16
> To: solr-user@lucene.apache.org
> Subject: RE: AIOOBException on trunk sin
Hi Tariq,
I'm glad that helped you :-).
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solved-Storing-MYSQL-DATETIME-field-in-solr-as-String-tp4106836p4112979.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello manju,
Thank you! It's really helpful for me.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solved-Storing-MYSQL-DATETIME-field-in-solr-as-String-tp4106836p4112977.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Fatima,
Did you re-index after that change? You need to re-index your documents.
Ahmet
On Thursday, January 23, 2014 7:31 AM, Fatima Issawi wrote:
Hi,
I have stored=true for my "content" field, but I get an error saying there is a
mismatch of settings on that field (I think) because of t
Hi Viresh,
defType=dismax should do the trick. By the way, the example solrconfig.xml
has an example of edismax query parser usage.
On Thursday, January 23, 2014 2:34 PM, Viresh Modi
wrote:
i checked solrconfig.xml in solr 4.3 and solr 1.4
In both i have checked
*Solr 1.4::*
*Solr 4.3::*
s
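To illustrate Ahmet's point (parameter names are standard Solr; the query and qf fields here are illustrative, not from Viresh's setup), in recent Solr the parser is selected per request with defType rather than via a qt handler:

```python
# Sketch: building a dismax request the Solr 4.x way. Nothing is sent
# anywhere; this only assembles the query string.
from urllib.parse import urlencode

params = {
    "q": "ipod",
    "defType": "dismax",       # replaces the old qt=dismax handler selection
    "qf": "name^2 description",
}
query_string = urlencode(params)
# e.g. append to http://host:8983/solr/collection1/select?
```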
On Wed, 2014-01-22 at 23:59 +0100, Bing Hua wrote:
> I am going to evaluate some Lucene/Solr capabilities on handling faceted
> queries, in particular, with a single facet field that contains large number
> (say up to 1 million) of distinct values. Does anyone have some experience
> on how lucene p
> Yes, that's correct.
>
> I also already tried the query you brought as example, but I have problems
> with the scoring.
> I'm using edismax as defType, but I'm not quite sure how to use it with a
> {!parent } query.
>
nesting query parsers is shown at
http://blog.griddynamics.com/2013/12/grandch
Hello,
I am finding that if any fields in a document returned by a Solr query
(*wt=json* to get a JSON response) contain backslash *'\'* characters, they
are not being escaped (to make them valid JSON).
e.g. Solr returns this: 'A quoted value *\"XXX\"*, plus these are
backslashes *\r\n* which sho
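To illustrate the escaping rule at issue (a generic JSON sketch, not Solr's actual output): inside a JSON string a backslash must itself be escaped, so `\n` is a legal escape but a backslash followed by a non-escape character is invalid JSON.

```python
import json

# "\n" is a legal JSON escape sequence, so this parses fine:
ok = '{"v": "line1\\nline2"}'          # raw text: {"v": "line1\nline2"}
value = json.loads(ok)["v"]            # a real newline inside the value

# A backslash before a non-escape character is NOT legal JSON:
bad = '{"v": "a \\x b"}'               # raw text: {"v": "a \x b"}
try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError:
    parsed = False                     # decoder rejects the stray backslash
```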
I'm not really aware enough of the Solr/Lucene internals to tell you
whether that's possible or not.
One thing occurred to me: What happens if you take optimize out of the
replication triggers in the replication handler?
optimize
Michael Della Bitta
Applications Developer
o: +1 646 532 3062
On 12/11/2013 2:41 AM, Elodie Sannier wrote:
> collection fr_blue:
> - shard1 -> server-01 (replica1), server-01 (replica2)
> - shard2 -> server-02 (replica1), server-02 (replica2)
>
> collection fr_green:
> - shard1 -> server-01 (replica1), server-01 (replica2)
> - shard2 -> server-02 (replica1),
On 1/23/2014 4:57 AM, saurish wrote:
> I am new to solr and successfully did a basic search. Now i am trying to do
> classification of the search results using carrrot's support which comes
> with solr 4.5.1. Would appreciate if someone tells me what is that i am
> missing...may be a trivial issue?
On 1/23/2014 5:33 AM, Viresh Modi wrote:
> i checked solrconfig.xml in solr 4.3 and solr 1.4
> In both i have checked
>
> *Solr 1.4::*
>
>
> *Solr 4.3::*
>
>
>
> so how to handle dismax query type(qt) in solr 4.3
> in solr 1.4.1 we have used qt=dismax
> but solr 4.3 there is no such configura
: I am finding that if any fields in a document returned by a Solr query
: (*wt=json* to get a JSON response) contain backslash *'\'* characters, they
: are not being escaped (to make them valid JSON).
you're going to have to give us more concrete specifics on how you are
indexing your data, and
Thanks Frank, Mikhail & Robert for your input!
I'm looking into your ideas, and running a few test queries to see how it works
out. I have a feeling that it is more tricky than it sounds. For example, let's
say I have 3 docs in my index:
Doc1:
m1: a b c d
m2: a b c
m3: a b
m4: a
mAll: a b c d /
We have a 125GB shard that we are attempting to split, but each time we try to
do so, we eventually run out of memory (java.lang.OutOfMemoryError: GC overhead
limit exceeded). We have attempted it with the following heap sizes on the
shard leader: 4GB, 6GB, 12GB, and 24GB. Even if it does eventu
I'm super happy to announce that the call for submissions for Berlin
Buzzwords 2013 is open. For those who don't know the conference - in
my "absolutely objective opinion" the event is the most exciting
conference on storing, processing and searching large amounts of
digital data for engineers.
Th
This is a known issue. Solr 4.7 will bring some relief.
See https://issues.apache.org/jira/browse/SOLR-5214
On Thu, Jan 23, 2014 at 10:10 PM, Will Butler wrote:
> We have a 125GB shard that we are attempting to split, but each time we try
> to do so, we eventually run out of memory (java.lang.
Yeah, I think we removed support in the new solr.xml format. It should still
work with the old format.
If you have a good use case for it, I don’t know that we couldn’t add it back
with the new format.
- Mark
On Jan 23, 2014, 3:26:05 AM, Per Steffensen wrote: Hi
In Solr 4.0.0 I used to
Hi Chris,
thanks for the fast response. I'll try to be more specific about the
problem I am having.
# cat tmp.xml
9553522
quote: (") backslash: (\)
backslash-quote: (\")
newline: (
) backslash-n: (\n)
# curl 'http://localhost:8983/solr/collec
Try changing your solrconfig.xml. Look for the following:
text/plain; charset=UTF-8
See if you have any luck changing that to application/json. The driving reason
for this text/plain default is those newlines allow the browser to display a
more formatted response to the user.
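For context, a sketch of where that setting lives in solrconfig.xml (the element name and class are the standard JSON response writer definition; treat the exact snippet as illustrative for your version):

```xml
<queryResponseWriter name="json" class="solr.JSONResponseWriter">
  <!-- example configs default to text/plain; charset=UTF-8 so browsers
       render the response readably; switch it to report real JSON -->
  <str name="content-type">application/json; charset=UTF-8</str>
</queryResponseWriter>
```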
T
For us we don't fully rely on cloud/collections api for creating and
deploying instances/etc.. we control this via an external mechanism so this
would allow me to have instances figure out what they should be based on an
external system.. we do this now but have to drop core.properties files all
ov
Thanks for the suggestions. After reading that document I feel even more
confused, though, because I always thought that hard commits should be less
frequent than soft commits.
Is there any way to configure autoCommit, softCommit values on a per
request basis? The majority of the time we have small flow
Also, any suggestions on debugging? What should I look for and how? Thanks
On Thu, Jan 23, 2014 at 10:01 AM, Software Dev wrote:
> Thanks for suggestions. After reading that document I feel even more
> confused though because I always thought that hard commits should be less
> frequent that hard
: The problem I have is if I try to parse this response in *php *using
: *json_decode()* I get a syntax error because of the '*\n*' s that are in
: the response. I could escape them before doing the *json_decode()* or at the
: point of submitting to the index but this seems wrong...
I don't really
On 1/23/2014 11:01 AM, Software Dev wrote:
Is there any way to configure autoCommit, softCommit values on a per
request basis? The majority of the time we have small flow of updates
coming in and we would like to see them in ASAP. However we occasionally
need to do some bulk indexing (once a week
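One per-request knob Solr does offer (whether the thread settled on it is not shown here, and the URL/collection below are hypothetical) is the commitWithin parameter on update requests, which bounds commit latency per request instead of globally via autoCommit:

```python
# Sketch: attaching commitWithin (milliseconds) to an update request URL.
# Only string assembly; no request is actually sent.
from urllib.parse import urlencode

params = {"commitWithin": 10000, "wt": "json"}
url = "http://localhost:8983/solr/collection1/update?" + urlencode(params)
# bulk jobs could omit commitWithin and rely on a final explicit commit
```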
Hi,
Have you tried maxWriteMBPerSec?
http://search-lucene.com/?q=maxWriteMBPerSec&fc_project=Solr
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Mon, Jan 20, 2014 at 4:00 PM, Software Dev wrote:
> We are testing our shi
Does maxWriteMBPerSec apply to NRTCachingDirectoryFactory? I only
see maxMergeSizeMB and maxCachedMB as configuration values.
On Thu, Jan 23, 2014 at 11:05 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> Have you tried maxWriteMBPerSec?
>
> http://search-lucene.com/?q=maxWrit
Hi,
I have configured single core master, slave nodes on 2 different machines.
The replication configuration is fine and it is working, but what I observed
is that on every change to the master index, full replication is triggered on
the slave.
I was planning to get only incremental indexing done on eve
Any update on this?
I am also stuck with the same problem: I want to install a snapshot of the
master Solr server in my local environment, but I couldn't :(
I've spent almost 2 days trying to figure out the way. Please help!!
--
View this message in context:
http://lucene.472066.n3.nabble.com/solrcloud-shar