Well, starting in 4.x you can sort by function (see
http://wiki.apache.org/solr/FunctionQuery). So if your encoding is such
that you can write a function that maps the order you can probably realize
this. You could probably combine that with external file fields (or maybe
even a sidecar index) that
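As a sketch (the field name and function here are assumptions, not from this thread), a function sort request might look like:

```
http://localhost:8983/solr/select?q=*:*&sort=div(popularity,price) desc
```

where div(popularity,price) stands in for whatever function maps your encoding onto the desired order.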
Nope. The problem is that the tie breaker is the internal Lucene doc id, which
a long time ago was invariant; that is, a document indexed later always had
a larger internal doc id. But the various merge policies can combine
segments such that the internal IDs can change relative to one another.
So
Not that I know of...
Best
Erick
On Thu, Jan 31, 2013 at 7:09 AM, Marcos Mendez wrote:
> Is there a way to do an atomic update (inc by 1) and retrieve the updated
> value in one operation?
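For reference, the atomic increment itself, without reading back the new value in the same call, looks roughly like this in Solr 4's JSON update syntax (the id and field name here are made up):

```json
[{"id":"doc1", "views":{"inc":1}}]
```

POSTed to /solr/update with Content-Type: application/json; retrieving the updated value still takes a separate query.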
Major changes (e.g. 3 -> 4) have some such differences, but usually it's
for the better. But it is sometimes disconcerting! Also, sometimes there
are different defaults and you can get the old behavior back...
Although you haven't quite said what was different. I know the stock
typedefs have chang
Take a look at &debug=all output, because one problem here is
that text:a b (or even +text:a +b) parses as
+text:a +defaultfield:b
And you haven't said whether you're using edismax or not.
So please do two things:
1> explain why the proposed solution doesn't solve your problem
2> provide the result
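For the second item, the result can be pulled with a request along these lines (the core name is an assumption):

```
http://localhost:8983/solr/collection1/select?q=text:a b&debug=all
```

The parsedquery section of the response is what shows whether you're actually getting +text:a +defaultfield:b.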
About patches.
1> sure, you can open a JIRA yourself, add patches
to it, and comment etc. You have to create a signon...
2> right, once a JIRA is opened much of the discussion
moves to the JIRA, that way there's a record of what
went into the code. Any side-channel discussions should
have their co
On Sun, Feb 3, 2013 at 7:46 AM, Erick Erickson wrote:
> Nope. Problem is that the tie breaker is the internal Lucene Doc id. Which
> a long time ago was invariant, that is a document indexed later always had
> a larger internal doc id. But the various merge policies can combine
> segments such th
Hi,
I'm new to Solr and have two questions.
#1)
I was wondering, if I wanted to index different object types, how would I go
about configuring them in the schema file?
Here is the hypothetical simple scenario. My site has Users and Posts. I
want to be able to index User objects, and Posts, but make it
What is the inverse I'd use to re-create/load a core on another machine but
make sure it's also "known" to SolrCloud/as a shard?
On Sat, Feb 2, 2013 at 4:01 PM, Joseph Dale wrote:
>
> To be more clear, let's say bob is the leader of core 1. On bob do a
> /admin/cores?action=unload&name=core1. Thi
With SolrCloud all cores are collections. The collections API is just a
wrapper that calls the core API a million times with one command.
to /solr/admin/cores?action=CREATE&name=core1&collection=core1&shard=1
Basically you're "creating" the shard again, after leader props have gone out.
Solr will c
I have a scenario in which I need to post 500,000 documents to Solr as a
test. I have these documents in XML files already formatted in Solr's
xml format.
Posting to Solr using post.jar it takes 1m55s. With a bit of bash
jiggery-pokery, I was able to get this down to 1m08s by running four
concurre
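The fan-out described above can be sketched like this (file names and the Solr URL are assumptions; echo stands in for the real per-file command so the pattern itself is runnable anywhere):

```shell
# Hedged sketch of the "bash jiggery-pokery": hand the XML files to four
# concurrent workers with xargs -P. The real command per file would be
# something like:
#   curl -s http://localhost:8983/solr/update \
#        -H 'Content-Type: text/xml' --data-binary @"$f"
printf '%s\n' doc1.xml doc2.xml doc3.xml doc4.xml \
              doc5.xml doc6.xml doc7.xml doc8.xml |
  xargs -P 4 -n 1 echo posted
```

xargs -P keeps four workers busy at once, which is roughly what running four concurrent post.jar instances buys you.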
What times do you get with DIH? It has native support for that format too.
On 3 Feb 2013 11:20, "Upayavira" wrote:
> I have a scenario in which I need to post 500,000 documents to Solr as a
> test. I have these documents in XML files already formatted in Solr's
> xml format.
>
> Posting to Solr u
I haven't tried DIH, although if it does support multithreading, I might
be inclined to.
Upayavira
On Sun, Feb 3, 2013, at 05:17 PM, Alexandre Rafalovitch wrote:
> What times do you get with DIH? It has native support for that format
> too.
> On 3 Feb 2013 11:20, "Upayavira" wrote:
>
> > I have
Solr doesn't explicitly support such relationships as its document model
is flat.
It would seem reasonable to index your users and posts into the same
index, making sure your field names don't overlap. That is, your
users might have a field 'name', and your posts have a field 'owner' and
anoth
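The non-overlapping-fields approach might look like this in schema.xml; the field names and the 'type' discriminator are assumptions for illustration:

```xml
<!-- hypothetical: shared discriminator plus per-object fields -->
<field name="type"  type="string"       indexed="true" stored="true"/>
<field name="name"  type="text_general" indexed="true" stored="true"/>
<field name="owner" type="string"       indexed="true" stored="true"/>
```

Queries can then restrict to one object type with fq=type:user or fq=type:post.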
Hi Amir,
One thing you may want to look at is the Apache OODT File Manager here. If
you can model your content objects as Apache OODT "product types" you can
then use the parent/child relationship model there to achieve what you are
after. Then if you use the SolrDumperTool from the Apache OODT fi
Many thanks Upayavira and Chris
Makes a lot of sense.
Thanks very much guys.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Beginner-Question-about-Types-and-Parent-Child-Definitions-in-Solr-Schema-xml-tp4038209p4038233.html
Sent from the Solr - User mailing list archiv
Hi,
I think the issue was not the zk client timeout but the POST request size.
When I increased the value of Request.maxFormContentSize in jetty.xml I no
longer see this issue.
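For anyone hitting the same thing: in the Jetty 8 shipped with Solr 4.x, that setting goes into etc/jetty.xml roughly like this (10MB shown; treat the exact wrapper element as an assumption and check against your own jetty.xml):

```xml
<Call name="setAttribute">
  <Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
  <Arg>10485760</Arg>
</Call>
```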
Regards.
On 3 February 2013 01:56, Mark Miller wrote:
> Do you see anything about session expiration in the logs? T
What led you to trying that? I'm not connecting the dots in my head - the
exception and the solution.
- Mark
On Feb 3, 2013, at 2:48 PM, Marcin Rzewucki wrote:
> Hi,
>
> I think the issue was not in zk client timeout, but POST request size. When
> I increased the value for Request.maxFormCont
I'm loading in batches. 10 threads read JSON files and load them into Solr
by sending POST requests (from a couple of dozen to a couple of hundred docs
per request). I had a 1MB POST request size limit, but when I changed it to
10MB the errors disappeared. I guess this could be the reason.
Regards.
On 3 Februa
On 2/3/2013 1:07 PM, Marcin Rzewucki wrote:
I'm loading in batches. 10 threads are reading json files and load to Solr
by sending POST request (from couple of dozens to couple of hundreds docs
in 1 request). I had 1MB post request size, but when I changed it to 10MB
errors disappeared. I guess th
Hi,
I set this:
org.eclipse.jetty.server.Request.maxFormContentSize
10485760
multipartUploadLimitInKB is set to 2MB in my case. The funny thing is that I
made the change only in jetty.xml.
I'll change this value back to 1MB and repeat test to check if this is the
reason.
Regards.
On 3 F
On 2/3/2013 1:18 PM, Isaac Hebsh wrote:
Hi.
I have a SolrCloud cluster which contains some servers. Each server runs
multiple cores.
I want to distribute the requests over the running cores on each server,
without knowing the core names in the client.
Question 1: Do I have any reason to do t
Thanks Shawn for your quick answer.
When using collection name, Solr will choose the leader, when available in
the current server (see getCoreByCollection in SolrDispatchFilter). It is
clear that it's useful when indexing. But queries should run on replicas
too, shouldn't they? Moreover, the core sele
On 2/3/2013 3:24 PM, Isaac Hebsh wrote:
Thanks Shawn for your quick answer.
When using collection name, Solr will choose the leader, when available in
the current server (see getCoreByCollection in SolrDispatchFilter). It is
clear that it's useful when indexing. But queries should run on replica
It worked, thanks a lot Arcadius.
On Fri, Feb 1, 2013 at 7:56 PM, Arcadius Ahouansou wrote:
> Hi Rohan.
> Solr 4.1 uses Jetty 8.
>
> You need to put your JDBC driver under SOLR_HOME/lib/ext
>
> SOLR_HOME/lib/ being where all the Jetty *.jar files sit.
> You may need to create "ext" if it does not exi
Hi,
I think I made 2 changes at the same time: increased maxFormContentSize and
zkClientTimeout (from 15s to 30s). When I restarted the cluster there were no
"ClusterState issues", but most probably due to the increased zkClientTimeout
and not maxFormContentSize. I did one more test with the default value
for m
But if I use my own machine as the Solr server it works fine. The problem
comes only if I use another machine as the Solr server. But both machines
have the same schema and solrconfig files.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Issue-with-spellcheck-and-autosuggest-tp4036
On 02.02.2013 03:48, Yonik Seeley wrote:
> On Fri, Feb 1, 2013 at 4:13 AM, Bernd Fehling
> wrote:
>> A question to the experts,
>>
>> why is the replicated index copied from its temporary location
>> (index.x)
>> to the real index directory and NOT moved?
>
> The intent is certainly t
Hi Arcadius,
Can you also help me with partial document update? I have followed what is
written in this blog but it's giving me an error:
http://solr.pl/en/2012/07/09/solr-4-0-partial-documents-update/
The error I'm getting after this command:
C:\Users\rohan>curl localhost:8983/solr/update?commit=true
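For what it's worth, a partial-update body in that blog's style is a JSON array along these lines (the id and field are made up):

```json
[{"id":"1", "price":{"set":10}}]
```

It has to be sent with a Content-Type: application/json header, and on the Windows cmd shell single quotes are not interpreted, so the JSON must be double-quoted with escaped inner quotes; that quoting alone is a common source of curl errors on Windows.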