The query:
http://localhost:8983/solr/select/?qt=tvrh&q=query:the&tv.fl=query&tv.all=true&f.id.tv.tf=true&facet.field=id&facet=true&facet.limit=-1&facet.mincount=1
Be careful with facet.limit=-1; it will pull back every facet value matching the query.
Paging through the facet values would probably make more sense in your case.
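If you do page, a sketch of the same request with facet paging instead of facet.limit=-1 (the page size of 100 is illustrative):

```text
http://localhost:8983/solr/select/?qt=tvrh&q=query:the&tv.fl=query&tv.all=true&f.id.tv.tf=true&facet=true&facet.field=id&facet.mincount=1&facet.limit=100&facet.offset=0
```

Increase facet.offset by facet.limit on each request to walk through the full list of facet values.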
f.id.tv
ZK is not really designed for keeping large data files, from
http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#Data+Access:
> ZooKeeper was not designed to be a general database or large object
> store. If large data storage is needed, the usual pattern of dealing
> with suc
On Sat, May 5, 2012 at 8:39 AM, Jan Høydahl wrote:
> support for CouchDb, Voldemort or whatever.
Hmmm... Or Solr!
-Yonik
Hello,
Is it possible to concatenate a field via DIH?
For example for the id field, in order to make it unique
I want to add 'project' to the beginning of the id field.
So the field would look like 'project1234'
Is this possible?
Thanks
There might be a Solr way of accomplishing this, but I've always done
stuff like this in SQL (i.e. the CONCAT command). Doing it a Solr-native
way would probably be better in terms of bandwidth consumption, but just
giving you that option early in case there's not a better one.
Michael
On Sat, 20
Sounds like you need a "Template Transformer": "... it helps to concatenate
multiple values or add extra characters to field for injection."
...
See:
http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer
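For the original question, a minimal data-config sketch (the entity name, query, and column are assumed; adjust to your setup):

```xml
<entity name="item" query="select id, ... from ..." transformer="TemplateTransformer">
  <!-- Prefix the source id with the literal string "project", e.g. 1234 -> project1234 -->
  <field column="id" template="project${item.id}"/>
</entity>
```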
Or did you have something different in mind?
-- Jack Krupansky
Thanks guys. I had taken a quick look at
the Template Transformer and it looks like it does
what I need it to do. I didn't see the 'hello' part
when reviewing earlier.
On Sat, May 5, 2012 at 11:47 AM, Jack Krupansky wrote:
> Sounds like you need a "Template Transformer": "... it helps to
> concatenat
On 5/4/2012 8:10 PM, Lance Norskog wrote:
Optimize takes a 'maxSegments' option. This tells it to stop when
there are N segments instead of just one.
If you use a very high mergeFactor and then call optimize with a sane
number like 50, it only merges the little teeny segments.
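For example, an optimize capped at 50 segments can be posted to the update handler as XML (the segment count is illustrative):

```xml
<optimize maxSegments="50"/>
```

The same thing as a URL parameter: http://localhost:8983/solr/update?optimize=true&maxSegments=50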
When I optimize,
On May 5, 2012, at 8:39 AM, Jan Høydahl wrote:
> ZK is not really designed for keeping large data files,
> from http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#Data+Access:
>> ZooKeeper was not designed to be a general database or large object
>> store. If large data storage
On May 5, 2012, at 2:37 AM, Trym R. Møller wrote:
> Hi
>
> Using Solr trunk with the replica feature, I see the below exception
> repeatedly in the Solr log.
> I have been looking into the code of RecoveryStrategy#commitOnLeader and read
> the code as follows:
> 1. sends a commit request (with
https://issues.apache.org/jira/browse/SOLR-3437
On May 5, 2012, at 12:46 PM, Mark Miller wrote:
>
> On May 5, 2012, at 2:37 AM, Trym R. Møller wrote:
>
>> Hi
>>
>> Using Solr trunk with the replica feature, I see the below exception
>> repeatedly in the Solr log.
>> I have been looking into t
The first thing I'd check is if, in the log, there is a replication happening
immediately prior to the error. I confess I'm not entirely up on the
version thing, but is it possible you're replicating an index that
is built with some other version of Solr?
That would at least explain your statement
Oh, isn't that easier! Need more coffee before suggesting things..
Thanks,
Erick
On Fri, May 4, 2012 at 8:16 PM, Lance Norskog wrote:
> If you are not using SolrCloud, splitting an index is simple:
> 1) copy the index
> 2) remove what you do not want via "delete-by-query"
> 3) Optimize!
>
> #2 b
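A sketch of step #2 run against one of the copies (the range query is hypothetical; use whatever condition partitions your data):

```xml
<delete><query>id:[* TO m]</query></delete>
<commit/>
<optimize/>
```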
We did it at my last job. Took a few days to split a 500M-doc index.
On Sat, May 5, 2012 at 9:55 AM, Erick Erickson wrote:
> Oh, isn't that easier! Need more coffee before suggesting things..
>
> Thanks,
> Erick
>
> On Fri, May 4, 2012 at 8:16 PM, Lance Norskog wrote:
>> If you are not using Solr
Which Similarity class do you use for the Lucene code? Solr has a custom one.
On Fri, May 4, 2012 at 6:30 AM, Benson Margulies wrote:
> So, I've got some code that stores the same documents in a Lucene
> 3.5.0 index and a Solr 3.5.0 instance. It's only five documents.
>
> For a particular field,
Use debugQuery=true to see exactly how the dismax parser sees this query.
Also, since this is a binary query, you can use filter queries
instead. Those use the Lucene syntax.
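For example (the field name and date range are assumed), moving the binary clause into a filter query and turning on debug output:

```text
http://localhost:8983/solr/select?q=some+query&fq=date_field:[2012-01-01T00:00:00Z%20TO%20*]&debugQuery=true
```

As a bonus, filter queries are cached separately from the main query, which helps when the same binary clause is repeated across requests.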
On Fri, May 4, 2012 at 8:14 AM, Erick Erickson wrote:
> Right, you need to do the explicit qualification of the date fiel
On Sat, May 5, 2012 at 7:59 PM, Lance Norskog wrote:
> Which Similarity class do you use for the Lucene code? Solr has a custom one.
I am embarrassed to report that I also have a custom similarity that I
didn't know about, and once I configured that into Solr all was well.
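For anyone hitting the same mismatch: a global similarity is registered near the bottom of schema.xml (the class name here is hypothetical):

```xml
<similarity class="com.example.MyCustomSimilarity"/>
```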
>
> On Fri, May 4, 201