Hi All,
Sorry to ask, but is it possible to create multiple collections in Solr standalone
mode, i.e. with only one Solr instance? I am able to create multiple collections in
a SolrCloud environment, but when creating one in Solr standalone, it says that
Solr is not in cloud mode. Any suggestions would be a great help.
Thanks, Chirs:
The schema is:
There is no default value for ptime. It is generated by users.
There are 4 shards in this solrcloud, and 2 nodes in each shard.
I was trying a query with a function query ({!boost b=dateDeboost(ptime)}
channelid:0082 && title:abc), which leads to different results fro
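For context, a sketch of how that boosted query string is composed (dateDeboost is the poster's own function, not a built-in; the field names come from the message above):

```python
# Compose the {!boost} query from the thread. dateDeboost() is the poster's
# custom function; channelid/title are the fields mentioned in the message.
base_query = "channelid:0082 && title:abc"
boosted = "{!boost b=dateDeboost(ptime)}" + base_query

print(boosted)
```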
On Dec 3, 2013, at 8:41 PM, Upayavira wrote:
>
>
> On Tue, Dec 3, 2013, at 03:22 PM, Peyman Faratin wrote:
>> Hi
>>
>> Is it possible to delete and commit updates to an index inside a custom
>> SearchComponent? I know I can do it with solrj but due to several
>> business logic requirements I
By default it sorts by score. If the score is constant, it will
order docs as they appear in the index, which effectively means an
undefined order.
For example, a *:* query doesn't have terms that can be used to score, so
every doc will get a score of 1.
Upayavira
On Tue, Dec 3, 2013, at
On Tue, Dec 3, 2013, at 03:22 PM, Peyman Faratin wrote:
> Hi
>
> Is it possible to delete and commit updates to an index inside a custom
> SearchComponent? I know I can do it with solrj but due to several
> business logic requirements I need to build the logic inside the search
> component. I a
bq: Do you have any sense of what a good upper limit might be, or how we
might figure that out?
As always, "it depends" (tm). And the biggest thing it depends upon is the
number of simultaneous users you have and the size of their indexes. And
we've arrived at the black box of estimating size agai
: Yes, I am populating "ptime" using a default of "NOW".
:
: I only store the id, so I can't get ptime values. But from the perspective
: of business logic, ptime should not change.
if you are populating it using a *schema* default then the warning text I
pasted into my last message would defini
On 12/03/2013 01:55 AM, Dmitry Kan wrote:
Hello!
We have been experimenting with post filtering lately. Our setup is a
filter with a long boolean query; drawing the example from Dublin's
Stump the Chump:
fq=UserId:(user1 OR user2 OR...OR user1000)
The underlying issue impacting performanc
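For reference, a sketch of how such a filter is demoted to a post filter: in Solr, local params {!cache=false cost=100} (cost of 100 or more) request post-filtering where the query type supports it. The user ids here are stand-ins:

```python
from urllib.parse import urlencode

# Build a long boolean fq and mark it as a post filter. cache=false skips the
# filter cache; cost >= 100 asks Solr to run it as a post filter when possible.
user_ids = ["user%d" % i for i in range(1, 4)]  # stand-in for ~1000 ids
fq = "{!cache=false cost=200}UserId:(" + " OR ".join(user_ids) + ")"

params = urlencode({"q": "*:*", "fq": fq})
print(fq)
```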
Sorry, I see that we are up to solr 4.6. I missed that.
On Tue, Dec 3, 2013 at 3:53 PM, hank williams wrote:
> Also, I see that the "lotsofcores" stuff is for solr 4.4 and above. What
> is the state of the 4.4 codebase? Could we start using it now? Is it safe?
>
>
> On Tue, Dec 3, 2013 at 3:33
Also, I see that the "lotsofcores" stuff is for solr 4.4 and above. What is
the state of the 4.4 codebase? Could we start using it now? Is it safe?
On Tue, Dec 3, 2013 at 3:33 PM, hank williams wrote:
>
>
>
> On Tue, Dec 3, 2013 at 3:20 PM, Erick Erickson wrote:
>
>> You probably want to look a
I know that I can use Atomic Updates for such cases but I want to
atomically update a field by a search result (I want to use that
functionality as like nested queries). Any other ideas are welcome.
2013/12/3 Furkan KAMACI
> How can I empty content of a field at Solr (I use Solr 4.5.1 as SolrCl
How can I empty content of a field at Solr (I use Solr 4.5.1 as SolrCloud)
via Solrj? I mean if I have that document at my index:
field1: "abc"
field2: "def"
field3: "ghi"
and if I want to empty the content of field2. I want to have:
field1: "abc"
field2: ""
field3: "ghi"
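One way to do this is an atomic update with a "set" operation; a sketch of the JSON body such a request would carry (the id value is made up; note atomic updates require the other fields to be stored so Solr can reconstruct the document):

```python
import json

# Atomic update document: "set" overwrites field2 with an empty string.
# Using None (JSON null) instead would remove the field entirely.
doc = {"id": "doc1", "field2": {"set": ""}}
body = json.dumps([doc])

print(body)
```

In SolrJ the equivalent is setting a Map with a "set" key as the field value of a SolrInputDocument before sending it.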
On Tue, Dec 3, 2013 at 3:20 PM, Erick Erickson wrote:
> You probably want to look at "transient cores", see:
> http://wiki.apache.org/solr/LotsOfCores
>
> But millions will be "interesting" for a single node, you must have some
> kind of partitioning in mind?
>
>
Wow. Thanks for that great link. Y
I've implemented what I want. I can add payload score into the document
score. I've modified ExtendedDismaxQParser and I can use all the abilities
of edismax at my case. I will explain what I did at my blog.
Thanks;
Furkan KAMACI
2013/12/1 Furkan KAMACI
> Hi;
>
> I use Solr 4.5.1 I have a case
You probably want to look at "transient cores", see:
http://wiki.apache.org/solr/LotsOfCores
But millions will be "interesting" for a single node, you must have some
kind of partitioning in mind?
Best,
Erick
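For reference, the per-core flags that the LotsOfCores page describes look roughly like this in solr.xml (core names here are hypothetical):

```xml
<!-- Sketch based on the LotsOfCores wiki: transient="true" lets the core be
     unloaded when the transient cache fills, and loadOnStartup="false"
     keeps it cold until first use. Names are made up. -->
<core name="user_core_0001" instanceDir="user_core_0001"
      transient="true" loadOnStartup="false"/>
```

The number of transient cores kept loaded at once is capped by the transientCacheSize attribute on the cores container.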
On Tue, Dec 3, 2013 at 2:38 PM, hank williams wrote:
> We are building a system wher
On Tue, Dec 3, 2013 at 4:45 AM, Dmitry Kan wrote:
> ok, we were able to confirm the behavior regarding not caching the filter
> query. It works as expected. It does not cache with {!cache=false}.
>
> We are still looking into clarifying the cost assignment: i.e. whether it
> works as expected for
We are building a system where there is a core for every user. There will
be many tens or perhaps ultimately hundreds of thousands or millions of
users. We do not need each of those users to have “warm” data in memory. In
fact doing so would consume lots of memory unnecessarily, for users that
mig
Try adding &debug=all and you'll see exactly how docs
are scored. Also, it'll show you exactly how your query is
parsed. Paste that if it's confused, it'll help figure out
what's going wrong.
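A sketch of what that request looks like (query term borrowed from the thread; the handler path is the usual /select):

```python
from urllib.parse import urlencode

# Same query with debug=all appended: Solr then returns the parsed query
# and a per-document score "explain" section in the response.
params = {"q": "agenda", "debug": "all"}
qs = urlencode(params)

print("/select?" + qs)
```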
On Tue, Dec 3, 2013 at 1:37 PM, Andreas Owen wrote:
> So isn't it sorted automaticly by relevance (boos
Yep, sorry, it doesn't work for file-based dictionaries:
> In particular, you still need to index the dictionary file once by issuing a
> search with &spellcheck.build=true on the end of the URL; if your system
> doesn't update that dictionary file, then this only needs to be done once.
> This m
So isn't it sorted automatically by relevance (boost value)? If not, should
I set it in solrconfig?
-Original Message-
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Tuesday, 3 December 2013 19:07
To: solr-user@lucene.apache.org
Subject: Re: json update moves doc to end
What o
AFAIK, if you don't supply or configure a sort parameter, Solr sorts
by "score desc".
In that case, you may want to understand (or at least view) how each
document's score is calculated: you can run the query with debugQuery set
and see the whole explain
This great tool helped me a lot: _http:/
What order, the order if you supply no explicit sort at all?
Solr does not make any guarantees about what order documents will come
back in if you do not ask for a sort.
In general in Solr/lucene, the only way to update a document is to
re-add it as a new document, so that's probably what's g
When I search for "agenda" I get a lot of hits. Now if I update the 2nd
result via JSON update, the doc is moved to the end of the index when I search
for it again. The field I change is editorschoice and it never contains
the search term "agenda", so I don't see why it changes the order. Why does
it
Yes, I have that, but it doesn't help. It seems Solr still needs the query
with the "spellcheck.build" parameter to build the spellchecker index.
2013/12/3 Kydryavtsev Andrey
> Did you try to add the
> <str name="buildOnCommit">true</str>
> parameter to your slave's spellcheck configuration?
>
> 03.12.2013, 12:04, "Mirko" :
>
Did you try to add the
<str name="buildOnCommit">true</str>
parameter to your slave's spellcheck configuration?
03.12.2013, 12:04, "Mirko" :
> Hi all,
> We use a Solr SpellcheckComponent with a file-based dictionary. We run a
> master and some replica slave servers. To update the dictionary, we copy
> the dictionary txt file to t
This occurs only on production environment so I can't profile it :-) Any
clues?
DirectUpdateHandler2 config:
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
--
View this message in context:
http://lucene.472066.n3.nabble.com/Constantly-increasing-time-of-full-data-import-tp4103873p4104722.htm
Hi
Is it possible to delete and commit updates to an index inside a custom
SearchComponent? I know I can do it with solrj but due to several business
logic requirements I need to build the logic inside the search component. I am
using SOLR 4.5.0.
thank you
No, I am running on the example jetty. I am re-running the import and
haven't hit the problem yet. Still running.
On Tue, Dec 3, 2013 at 5:45 PM, Eric Bus wrote:
> Are you currently running SOLR under Tomcat or standalone with Jetty? I
> switched from Tomcat to Jetty and the problems went away.
I don't recall hearing any discussion of such a switch. In fact, Solr now
has its own copy of the classic Lucene query parser since Solr needed some
features that the Lucene guys did not find acceptable.
That said, if you have a proposal to dramatically upgrade the base Solr
query parser, as w
Are you currently running SOLR under Tomcat or standalone with Jetty? I
switched from Tomcat to Jetty and the problems went away.
- Eric
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Tuesday, 3 December 2013 12:44
To: solr-user@lucene.ap
I just ran into this issue on solr 4.6 on an EC2 machine while
indexing wikipedia dump with DIH. I'm trying to isolate exceptions
before the SolrCoreState already closed exception.
On Sun, Nov 10, 2013 at 11:58 PM, Mark Miller wrote:
> Can you isolate any exceptions that happened just before that
ok, we were able to confirm the behavior regarding not caching the filter
query. It works as expected. It does not cache with {!cache=false}.
We are still looking into clarifying the cost assignment: i.e. whether it
works as expected for long boolean filter queries.
On Tue, Dec 3, 2013 at 8:55 A
It's just a text type. So, just declare another field and instead of
text_general or text_en, use text_ar. Then use copyField from source text
field to it.
Go through the tutorial, if you haven't yet. It explains some of the things.
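A minimal sketch of what that could look like in schema.xml (field names here are made up; text_ar is the Arabic field type shipped with the example schema):

```xml
<!-- Hypothetical field names: copy the main text field into an
     Arabic-analyzed sibling so both analyses can be searched. -->
<field name="text_ar_copy" type="text_ar" indexed="true" stored="false"/>
<copyField source="text" dest="text_ar_copy"/>
```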
Regards,
Alex.
Personal website: http://www.outerthoughts.co
Hi,
Thanks for your post.
I do not know how to use the "text_ar" field type for the Arabic language. What
configuration do I need to add in the schema.xml file? Please guide me.
AnilJayanti
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-Multiple-Languages-with-solr-Arabic
Hi all,
We use a Solr SpellcheckComponent with a file-based dictionary. We run a
master and some replica slave servers. To update the dictionary, we copy
the dictionary txt file to the master, from where it is automatically
replicated to all slaves. However, it seems we need to run the
"spellcheck.