On 11/30/2012 10:11 PM, Joe Zhang wrote:
May I ask: how to set up multiple indexes, and specify which index to send
the docs to at indexing time, and later on, how to specify which index to
work with?
A related question: what is the storage location and structure of solr
indexes?
When you index
Multiple indexes can be set up using the multi-core feature of Solr.
Below are the steps:
1. Add the core name and storage location of the core to
the $SOLR_HOME/solr.xml file (a minimal sketch follows these steps).
2. Create the core directories specified and the following sub-directories in
them:
- conf: Contains th
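A minimal sketch of the legacy solr.xml format used through Solr 4.x, assuming
two cores named core0 and core1 (names and instanceDir paths are illustrative):

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>

Each instanceDir then needs its own conf/ (solrconfig.xml, schema.xml) and
data/ directories, matching step 2. At index and query time you address a core
through its URL, e.g. http://localhost:8983/solr/core0/update and /core0/select.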
Try separating multi-word synonyms with a null byte:
simple\0syrup,sugar\0syrup,stock\0syrup
see https://issues.apache.org/jira/browse/LUCENE-4499 for details
roman
On Sun, Feb 5, 2012 at 10:31 PM, Zac Smith wrote:
> Thanks for your response. When I don't include the KeywordTokenizerFactory
>
: Background: Basically, I have added a new feature to Solr after I got the
: source code. Similar to the way we get "score" in the result set, I am now able
: to get position (or ranking) information of each document in the list. i.e
: if there are 5 documents in the result set, each of them has its p
Sorry, correction.
${solr.core.instanceDir} is working in a sense. It is replaced by the
core name, rather than a directory path.
Earlier, at startup time, Solr prints out:
INFO: Creating SolrCore 'collection1' using instanceDir: solr/collection1
But judging from the error message I get, ${sol
On Nov 30, 2012, at 5:04 PM, Shawn Heisey wrote:
> The other exceptions in the log look more serious.
My guess would be that the sizeOf exception is leading to the others.
- Mark
I tried to use ${solr.core.instanceDir} in schema.xml with Solr 4.0,
where every deployment is multi-core, and it didn't work.
It must be that the description of pre-defined properties on the
CoreAdmin wiki page is wrong, or perhaps it only works in solrconfig.xml?
On 11/28/12 5:17 PM, T. Kuro
On 11/30/2012 2:24 PM, Mark Miller wrote:
Are you using a local filesystem, and not something like NFS?
If you are not replicating, I'm surprised this would happen and doubt it's the
bug mentioned in the other reply.
It could just be a bug we have to defend against - if a file is not there
because the i
I don’t have a simple answer for your stated issue, but maybe part of that is
because I’m not so sure what the exact problem/goal is. I mean, what’s so
special about phrase queries for your app that they need distinct processing
from individual terms?
And, ultimately, what goal are you trying t
On 11/30/2012 2:24 PM, Markus Jelsma wrote:
Hi, try updating your checkout, I think that's fixed now.
https://issues.apache.org/jira/browse/SOLR-4117
Thank you, that's the most common problem. I'll let you know how it
turns out.
There are still other problems in the log. Anyone have any i
Are you using a local filesystem, and not something like NFS?
If you are not replicating, I'm surprised this would happen and doubt it's the
bug mentioned in the other reply.
It could just be a bug we have to defend against - if a file is not there
because the index is changing as we are counting, we shou
Hey, great advice Amit, Jack, and Chris. It's been a while since I got such
a nice array of options! My response... yes, Amit, I thought of your way
before posting... I was just thinking, eh, there must be a way in SOLR,
since it was so easy to do the facets. So I wanted an alternative first
bef
Hi, try updating your checkout, I think that's fixed now.
https://issues.apache.org/jira/browse/SOLR-4117
-Original message-
> From:Shawn Heisey
> Sent: Fri 30-Nov-2012 22:21
> To: solr-user@lucene.apache.org
> Subject: Exceptions in branch_4x log
>
> This is branch_4x, checked out 20
This is branch_4x, checked out 2012-11-28. Here is my solr log, created
by log4j at WARN level:
http://dl.dropbox.com/u/97770508/solr-2012-11-30.log
There are a bunch of unusual exceptions in here. Most of them appear to
be related to getting information from the mbeans handler, specifically
: I use it like this:
: SolrParams params = req.getParams();
: String q = params.get(CommonParams.Q).trim();
:
: The exception is from the second line if "q" is empty.
: I can see "q.alt=*:*" in my defaults within params.
:
: So why is it not picking up "q.alt" if "q" is empty?
You're talking ab
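A defensive sketch for custom code that reads q, assuming the stock SolrParams
API (the "*:*" default is just an illustration):

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.DisMaxParams;
import org.apache.solr.common.params.SolrParams;

SolrParams params = req.getParams();
String q = params.get(CommonParams.Q);
if (q == null || q.trim().length() == 0) {
  // the client sent no usable q: fall back to q.alt, then to match-all
  q = params.get(DisMaxParams.ALTQ, "*:*");
}

q.alt is only applied automatically by the dismax/edismax query parsers; code
that calls params.get(CommonParams.Q) directly has to handle the missing value
itself.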
: What's the performance impact of doing this?
the function approach should have slower query times compared to the "new
field containing day" approach because it has to do the computation for
every doc at query time, but the new-field approach is less flexible because
you have to know in advance you want to use i
Apologies for the cross-post.
Blacklight 4.0.0 was just released yesterday evening. One of the most notable
changes in this release is a switch to using Twitter Bootstrap for our UI
component. We have taken a fairly generic approach which will allow
implementers to take full advantage of the
: query.setParam(GroupParams.GROUP_MAIN, true);
...
: GroupResponse groupResponse = response.getGroupResponse(); // null
:
: Search result is ok, QueryResponse contains docs I searched for. But group
: response is always null. Did I miss something, some magic parameter for
Here is how I have previously used grouping. Note I am using Solr 3.5:
SolrQuery query = new SolrQuery("");
query.setRows(GROUPING_LIMIT);
query.setParam("group", Boolean.TRUE);
query.setParam("group.field", "GROUP_FIELD");
This seems to work for me.
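For the ngroups part of the question, a rough SolrJ sketch (Solr 4.0 era),
assuming a hypothetical field GROUP_FIELD and an existing SolrServer instance;
note that group.main=true flattens the result into a plain doc list, so
getGroupResponse() stays null unless you leave group.main off:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.GroupCommand;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrQuery query = new SolrQuery("*:*");
query.set("group", true);
query.set("group.field", "GROUP_FIELD");
query.set("group.ngroups", true);           // ask for the total number of groups
QueryResponse rsp = server.query(query);
GroupCommand cmd = rsp.getGroupResponse().getValues().get(0);
Integer totalGroups = cmd.getNGroups();     // null unless group.ngroups=true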
On Fri, Nov 30, 2012 at 1:17 PM, Roman Slav
On Fri, Nov 30, 2012 at 12:13 PM, Roman Chyla wrote:
>
> The code here:
>
> https://github.com/romanchyla/montysolr/blob/solr-trunk/contrib/adsabs/src/test/org/adsabs/lucene/BenchmarkAuthorSearch.java
>
> The benchmark should probably not be called 'benchmark', do you think it
> may be too simpli
This issue adds the SpanFirstQuery to edismax.
https://issues.apache.org/jira/browse/SOLR-3925
It unfortunately cannot produce progressively higher boosts if the term is
closer to the beginning.
-Original message-
> From:Jack Krupansky
> Sent: Fri 30-Nov-2012 18:54
> To: solr-user@
Wow, an XPA user!
The distributed search merging and global IDF calculation that we used in
Ultraseek XPA is described here:
http://wunderwood.org/most_casual_observer/2007/04/progressive_reranking.html
If you have per-term document frequencies and numdocs for each shard, you can
calculate glo
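A rough sketch of the merge step, assuming Lucene's classic formula
idf = 1 + ln(numDocs / (docFreq + 1)); the arrays stand in for whatever
per-shard statistics you have collected:

// sum docFreq and numDocs across shards, then apply the classic Lucene idf
static double globalIdf(long[] shardDocFreq, long[] shardNumDocs) {
  long df = 0, numDocs = 0;
  for (long d : shardDocFreq) df += d;
  for (long n : shardNumDocs) numDocs += n;
  return 1.0 + Math.log((double) numDocs / (double) (df + 1));
}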
Two choices:
1. You need the Lucene SpanFirstQuery, but the normal Solr query parsers
don't support it, so you need to roll your own (a rough sketch follows below).
2. Do a custom update processor that at index time inserts a special start
marker like "aaafirstaaa" at the beginning of each field that needs this
feature. The
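A rough Java sketch of choice 1, using the plain Lucene API (field name, term
and boost are illustrative); end=1 restricts the match to the first token
position:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanFirstQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

SpanTermQuery term = new SpanTermQuery(new Term("title", "account"));
SpanFirstQuery first = new SpanFirstQuery(term, 1);  // matches only if "account" is the first token
first.setBoost(5.0f);  // add as an optional (SHOULD) clause next to the main query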
Also found some 1M test results:
258033ms. Building index of 100 docs
29703ms. Verifying data integrity with 100 docs
1821ms. Preparing 1 random queries
2867284ms. Regex queries
18772ms. Regexp queries (new style)
29257ms. Wildcard queries
4920ms. Boolean queries
Totals: [1749708, 1744494, 1
Hi,
I need to boost documents containing the search keyword in the first
position of the indexed data, e.g.:
If I have 3 documents indexed as below,
Account number
Data account and account number
Information number account data account
Account indicator
when a user searches for the keyword account, I wa
What mime type do you get for binary files? Maybe the server is misconfigured for
that extension and sends them as text. Then they could be the markers.
Do they look like markers?
Regards,
Alex
On 30 Nov 2012 04:06, "Eva Lacy" wrote:
> Doesn't make much sense if they are in binary files as well.
>
We are running 1.6 update 37. That was released on the same day as your
version, so it should have the same bug fixes. We use these options in
production; it is very stable:
export CATALINA_OPTS="$CATALINA_OPTS -d64"
export CATALINA_OPTS="$CATALINA_OPTS -Xms4096m -Xmx6144m"
export CATALINA_OPTS=
We are currently operating at reduced load which is why the ParNew
collections are not a problem. I don't know how long they were taking
before though. Thanks for the warning about index formats.
Our JVM is:
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (b
On Nov 30, 2012, at 11:01 AM, yayati wrote:
> We have created a custom search component, where this error occurs in the
> inform method at this line:
> .getResourceLoader().getConfigDir()));
Does your custom component try and get the config dir? What for?
- Mark
Hi Mark,
Please find detail stacktrace :
2012-11-30 19:32:58,260 [pool-2-thread-1] ERROR
apache.solr.core.CoreContainer - null:org.apache.solr.common.SolrException:
ZkSolrResourceLoader does not support getConfigDir() - likely, what you are
trying to do is not supported in ZooKeeper mode
Thanks for all the detailed info!
Yes, that is confusing. One of the sore points we have while supporting both
std Solr and SolrCloud mode.
In SolrCloud, every node is a Master when thinking about std Solr replication.
However, as you see on the cloud page, only one of them is a *leader*. A lea
Hi All,
I have my field definition in schema.xml like below
I need to create a separate record in Solr for each parent-child
relationship... such that if a child is the same across different parents it
gets stored only once.
For e.g.
---_Record 1
ABC
EMP001
DOC001
My Parent Doc
-
Dear list,
after going from 3.6 to 4.0 I see exceptions in my logs.
It turned out that somehow the "q"-parameter was empty.
With 3.6 the "q.alt" in solrconfig.xml worked as a fallback, but now with 4.0
I get exceptions.
I use it like this:
SolrParams params = req.getParams();
String q = params.
A POC will tell you - it is 90% driven by your particular environment, your
particular schema, your particular data, and your particular queries (e.g.,
how many documents they match, how many days they match.)
But please do share with us your results after conducting your POC - which
is an abs
I really don't remember. Yes, you don't want it to start with a /, yes it's
part of the node name, but the node name should have all / turned into _.
I'd simply try it - enforce no starting / instead, turn / into _ for the node
name…see what tests pass, do some manual testing…
That's all I've
Tag me baffled. But these are copied around all the time, so I'm guessing an
interaction between your servlet container and your request, which is like
saying "it must be magic". You can tell I'm in places where I'm
clueless...
Sorry I can't be more help.
Erick
On Fri, Nov 30, 2012 at 4:06 AM,
Just glad it's resolved
Erick
On Thu, Nov 29, 2012 at 7:46 PM, Buttler, David wrote:
> Sorry, yes, I had been using the BETA version. I have deleted all of
> that, replaced the jars with the released versions (reduced my core count),
> and now I have consistent results.
> I guess I missed
Need more information about your setup and config.
Longer stack traces would be helpful as well.
- Mark
On Nov 30, 2012, at 12:35 AM, yayati wrote:
> Hi All,
>
> I also got a similar error while moving my Solr 3.6-based application to Solr
> Cloud. While setting up SolrCloud I got this error:
> S
On Nov 30, 2012, at 5:08 AM, Arkadi Colson wrote:
> Hi
>
> I've set up a simple 2-machine cloud with 1 shard, one replicator and 2
> collections. Everything went fine. However, when I look at the interface,
> http://localhost:8983/solr/#/coll1/replication is reporting that both machines
> are m
Right, so here's what I'd check for.
Your logs should show a replication pretty coincident with the spike and
that should be in the log. Note: the replication should complete just
before the spike.
Or you can just turn replication off and fire it manually to try to force
the situation at will, se
Hi,
Thank you for your help. The issue is now resolved after using the analysis
tool as suggested by Jack and Chris. We used the following filters in the
end for this field:
WordDelimiterFilterFactory does the tric
Hi,
we are using the edismax query parser and execute queries on specific fields by
using the qf option. Like others, we are facing the problem that we do not want
explicit phrase queries to be performed on some of the qf fields, and we also
require additional search fields for those kinds of queries.
We
You might look into joins. Be aware that the sweet spot for joins is when
the field you're joining on doesn't have a huge number of unique values per
document.
But that's about all I can think of offhand
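For reference, a rough sketch of the join query parser syntax from SolrJ,
with made-up field names parent_id and id:

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery q = new SolrQuery("name:ABC");
// run type:child, collect its parent_id values, and keep only main-query
// docs whose id is in that set (all field names here are hypothetical)
q.addFilterQuery("{!join from=parent_id to=id}type:child");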
Best
Erick
On Thu, Nov 29, 2012 at 1:29 AM, ninaddesai82 wrote:
> Thanks Erick for replyi
Hi guys,
I have a problem with grouping in Solr 4.0 using the SolrJ API. I need this:
search some documents limited by a Solr query, group them by one field,
and return the total count of groups.
There is a param 'group.ngroups' for adding the group count to the group
response. Sounds easy, so I wrote something
Hi All,
I'm a bit new to the whole solr world and am having a slight problem with
replication. I'm attempting to configure a master/slave scenario with bulk
updates happening periodically. I'd like to insert a large batch of docs to
the master, then invoke an optimize and have it only then repli
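A master-side sketch for that, assuming the stock ReplicationHandler in
solrconfig.xml; with replicateAfter set only to optimize, slaves keep polling
but should only pull a new index version after an optimize (confFiles is
optional and illustrative):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">optimize</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>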
Yes, we do use pagination; we show 10 or 15 results, but the user has an
option to select them all (the count of the query result returned by
Solr). When he uses this functionality we need all the selected doc ids (or
original primary keys from the database) in the ASP.NET application as fast
as po
Shawn Heisey wrote:
[..]
For best results, you'll want to ensure that Solr4 is working completely
from scratch, that it has never seen a 3.3 index, so that it will use
its own native format.
That's what I did in the second run. Thanks for clarifying that this is
in fact better. :)
It may be a
Doesn't make much sense if they are in binary files as well.
On Thu, Nov 29, 2012 at 10:16 PM, Lance Norskog wrote:
> Maybe these are text encoding markers?
>
> - Original Message -
> | From: "Eva Lacy"
> | To: solr-user@lucene.apache.org
> | Sent: Thursday, November 29, 2012 3:53:07 A