Hi Shawn,
The transaction log is only being used to support near-real-time search at the
moment, I think, so it sounds like it's surplus to requirements for your
use-case. I'd just turn it off.
Alan Woodward
www.romseysoftware.co.uk
On 15 Oct 2012, at 07:04, Shawn Heisey wrote:
> On 10/14/20
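If it helps, "turning it off" would mean removing or commenting out the updateLog element in solrconfig.xml. A minimal sketch, using the placeholder directory from the stock 4.0 config:

```xml
<!-- In solrconfig.xml, inside the update handler: comment out or delete
     the updateLog element to disable the transaction log. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!--
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  -->
</updateHandler>
```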
The extra codecs are supplied in a separate jar file now
(lucene-codecs-4.0.0.jar) - I guess this isn't being packaged into solr.war by
default? You should be able to download it here:
http://search.maven.org/remotecontent?filepath=org/apache/lucene/lucene-codecs/4.0.0/lucene-codecs-4.0.0-javad
Hi Rogerio,
I can imagine what it is. Tomcat extracts the war files into
/var/lib/tomcatXX/webapps.
If you already run an older Solr version on your server, the old
extracted Solr war could still be there (keyword: Tomcat cache).
Delete the /var/lib/tomcatXX/webapps/solr folder and restart Tomcat,
w
Can you share more please?
I do not know exactly what the formula for calculating the ratio is.
If you have something like: (term count in shard 1 + term count in shard
2) / num documents in all shards,
then just use shard size as a weight while computing this:
(term count in shard 1 * shard1 keyspace
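The weighting idea above can be written out. With $t_s$ the term count and $N_s$ the document count in shard $s$, so the per-shard ratio is $r_s = t_s / N_s$, weighting each shard's ratio by its size recovers the exact global ratio:

```latex
% Since t_s = r_s N_s, the size-weighted average of shard ratios
% equals the global ratio computed over all shards:
\frac{\sum_s t_s}{\sum_s N_s} \;=\; \frac{\sum_s r_s\,N_s}{\sum_s N_s}
```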
Hi,
I am trying to import a mysql database : sampledatabase.
When I run the full import command
http://localhost:8983/solr/db/dataimport?command=full-import in the
browser, I get the following error in the terminal after about 1 minute.
Oct 16, 2012 3:49:20 PM org.apache.solr.core.SolrCore e
Hi,
On 15 Oct 2012, at 11:02, Romita Saha wrote:
> My dataconfig.xml file looks like the following :
>
> -
> url="jdbc:mysql://localhost:8983/home/demo/snp-comm/sampledatabase" />
> -
> -
The error information means that the connection wasn't accepted by the server.
I
Hi Dave,
Thank you for your prompt reply. The name of the database I am using is
sampledatabase.sql and it is located in home/demo/snp-comm folder. Hence I
have specified the url as
url="jdbc:mysql://localhost:8983/home/demo/snp-comm/sampledatabase.sql" />
Could you please specify which
Hi Romita,
On 15 Oct 2012, at 11:46, Romita Saha wrote:
> Thank you for your prompt reply. The name of the database I am using is
> sampledatabase.sql and it is located in home/demo/snp-comm folder. Hence I
> have specified the url as
>
> url="jdbc:mysql://localhost:8983/home/demo/snp-comm/s
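As a sketch of the usual form (all names hypothetical): a MySQL JDBC URL names a database on the MySQL server, default port 3306, not a filesystem path or an .sql dump file, and Solr's own port 8983 does not belong in it:

```xml
<!-- Sketch only; driver, credentials, and query are placeholders. -->
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/sampledatabase"
              user="demo" password="secret"/>
  <document>
    <entity name="item" query="SELECT id, name FROM item"/>
  </document>
</dataConfig>
```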
Hi Vadim,
In fact, Tomcat is running in another non-standard path; there's no old
version deployed on Tomcat, I double-checked.
Let me try in another environment.
-Original Message-
From: Vadim Kisselmann
Sent: Monday, October 15, 2012 6:01 AM
To: solr-user@lucene.apache.org ; ro
Hi,
I have many documents indexed into Solr. I am now facing a requirement
where the search results should be returned sorted based on their scores.
In the *case of non-exact matches*, if there is a tie, another level of
sorting is to be applied on a field called priority.
I am using solr with dj
sort=score desc, priority desc
Won't that do it?
Upayavira
On Mon, Oct 15, 2012, at 09:14 AM, Sandip Agarwal wrote:
> Hi,
>
> I have many documents indexed into Solr. I am now facing a requirement
> where the search results should be returned sorted based on their scores.
> In the *case of non-
Following entity definition:
Hi,
while starting solr-4.0.0 I get the following exception:
SEVERE: null:java.lang.IllegalAccessError:
class org.apache.lucene.codecs.lucene3x.PreFlexRWPostingsFormat cannot access
its superclass org.apache.lucene.codecs.lucene3x.Lucene3xPostingsFormat
Very strange, because some lines earlier i
I have no idea how you managed to get so many files in
your index directory, but that's definitely weird. How it
relates to your "file not found", I'm not quite sure, but it
could be something as simple as you've run out of file
handles.
So you could try upping the number of
file handles as a _tem
Can we limit the copyField source with a condition?
For example, if we want to do a lookup with source="product_name" and
dest="some_dest",
so our syntax would become
How about copying only those product_names having status=0 AND attribute1=1
AND attribute2=0?
Assume status, attribute1, attribute2 and product_name
Hello,
I think you don't have that many tuning possibilities using only the
schema.xml file.
You will have to write some custom Java code (subclasses of
UpdateRequestProcessor and UpdateRequestProcessorFactory), build a Java jar
containing your custom code, put that jar in one of the path declared
Hi there!
I cannot read timestamp data from QueryResponse (I want to cast the result to a
POJO). If I'm using SolrDocumentList there are no errors.
db-data-config.xml:
Here is what I posted on StackOverflow:
The boost in edismax can be used for this. It is applied to all scores, but if
it is a small value, it will only make a difference for ties or near-ties.
Significant differences in the base score will not be reordered.
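A sketch of such a request, assuming a numeric field named priority (hypothetical) and a deliberately small weight so that only ties and near-ties get reordered:

```
q=foo&defType=edismax&boost=sum(1,product(0.001,field(priority)))
```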
See:
http://wiki.apache.org/solr/Ex
Hi, Erick
Thanks for your advice. My mergeFactor is set to 10, so it's impossible to
have so many segments, especially since some .fdx and .fdt files are just empty. And
sometimes indexing works fine, ending with 200+ files in the data dir. My
deployment has two cores and two shards for every core, using
autoc
On 10/15/2012 2:47 AM, Alan Woodward wrote:
The extra codecs are supplied in a separate jar file now
(lucene-codecs-4.0.0.jar) - I guess this isn't being packaged into solr.war by
default? You should be able to download it here:
http://search.maven.org/remotecontent?filepath=org/apache/lucene
Hi,
Thanks for the suggestions. Didn't work for me :(
I'm calling
which depends on org.eclipse.jetty:jetty-server
which depends on org.eclipse.jetty.orbit:jettty-servlet
I think I'm experiencing https://jira.codehaus.org/browse/JETTY-1493.
The pom file for
http://repo1.maven.org/maven2/org/e
Thank you very much Otis, regular old Solr
distributed search was the piece I was missing. Now it's hands-on time!
--
Rui
On 10/14/12 12:19 PM, Jack Krupansky wrote:
There's a miscommunication here somewhere. Is Solr 4.0 still passing
"*:*" to the analyzer? Show us the parsed query for "*:*", as well as
the debugQuery "explain" for the score.
I'm not quite sure what you mean by the parsed query for "*:*".
This fak
Apologies, there was a typo in my last message.
org.eclipse.jetty.orbit:jettty-servlet should have been
org.eclipse.jetty.orbit:javax.servlet
On Mon, Oct 15, 2012 at 11:19 AM, P Williams wrote:
> Hi,
>
> Thanks for the suggestions. Didn't work for me :(
>
> I'm calling
> conf="test->default
And you're absolutely certain you see "*:*" being passed to your analyzer in
the final release of Solr 4.0???
-- Jack Krupansky
-Original Message-
From: T. Kuro Kurosaka
Sent: Monday, October 15, 2012 1:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Any filter to map mutiple token
>
> This should not be required, because I am building from source. I compiled
> Solr from lucene-solr source checked out from branch_4x. I grepped the
> entire tree for lucene-codec and found nothing.
>
> It turns out that running 'ant generate-maven-artifacts' created the jar file
> -- al
Hi TJ.
If you use a circle query shape, it's O(N), plus it puts all the points in
memory. If you use a rectangle via bbox then I'm not sure, but it's fast enough
that I wouldn't worry about it. If my understanding is correct on Lucene
TrieRange fields, it's O(Log(N)). If you want fast filterin
On 10/15/12 10:35 AM, Jack Krupansky wrote:
And you're absolutely certain you see "*:*" being passed to your
analyzer in the final release of Solr 4.0???
I don't have direct evidence. This is the only theory I have that
explains why changing FieldType causes the sub-optimal scores.
If you kno
Hi Folks,
I have been looking at solrcloud to solve some of our problems with solr in
a distributed environment. As you know, in such an environment, every
instance of solr or zookeeper can come into existence and go out of
existence - at any time. So what happens if instances of ZK disappear and
See discussion on https://issues.apache.org/jira/browse/SOLR-3843, this was
apparently intentional.
That also links to the following:
http://wiki.apache.org/solr/SolrConfigXml#codecFactory, which suggests you need
to use solr.SchemaCodecFactory for per-field codecs - this might solve your
post
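For reference, the per-field codec setup that wiki page describes is a one-line entry in solrconfig.xml:

```xml
<codecFactory class="solr.SchemaCodecFactory"/>
```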
Hi Jorge,
Please see the notes on Polygons:
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4#JTS_.2BAC8_WKT_.2BAC8_Polygon_notes
This bullet in particular is relevant:
• The standard way to specify a rectangle in WKT is a Polygon -- WKT
doesn't have a rectangle shape. If you wan
Hi Alan,
I don't have any direct feedback... but I know there is an issue that
you may want to be aware of (and incorporate?) -
https://issues.apache.org/jira/browse/SOLR-3514
Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/s
My first guess would be a classpath error given this
references lucene3x.
Since all that's deprecated, is there any chance you're
somehow getting a current trunk (5x) jar in there?
Because I see no such error when I start 4.0...
Best
Erick
On Mon, Oct 15, 2012 at 8:42 AM, Bernd Fehling
: SEVERE: null:java.lang.IllegalAccessError:
: class org.apache.lucene.codecs.lucene3x.PreFlexRWPostingsFormat cannot access
: its superclass org.apache.lucene.codecs.lucene3x.Lucene3xPostingsFormat
that sounds like a classpath error.
: Very strange, because some lines earlier in the logs I have
After a bit of research, I realized that if I am using EmbeddedSolrServer
then I need to also do a hard commit in the Searcher (which runs in a
separate JVM). So I tried that, but I am getting a LockException. It looks like
the EmbeddedSolrServer locks the Solr index for writing, and when I try to do a
com
Hi Matt.
The documentation is here:
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
The sort / relevancy section is a TODO; I've been improving this document
a bit at a time lately.
My comments are within...
On Oct 5, 2012, at 10:10 AM, Matt Mitchell wrote:
> Hi,
>
> Apolog
: I'm trying to get a Solr4 install going, building without slf4j bindings. I
...
: If I use the standard .war file, sharedLib works as I would expect it to. The
: solr.xml file is found and it finds the sharedLib directory just fine, as you
: can see from this log excerpt:
...
:
: As an interim measure, I tried putting the jars in a separate directory and
: added a commandline option for the classpath. I also downgraded to 1.6.4,
: because despite asking for a war without it, the war still contains slf4j-api
: version 1.6.4. The log still shows that it failed to find a l
> slf4j-api has to match what solr was compiled against so that the logging
> calls solr makes will still work.
To my knowledge, that's not strictly true:
http://www.slf4j.org/faq.html#compatibility
Michael Della Bitta
Appinions
18 East 41st Str
On 10/15/2012 12:38 PM, Alan Woodward wrote:
See discussion on https://issues.apache.org/jira/browse/SOLR-3843, this was
apparently intentional.
That also links to the following:
http://wiki.apache.org/solr/SolrConfigXml#codecFactory, which suggests you need
to use solr.SchemaCodecFactory for
: on Tomcat I setup the system property pointing to solr/home path,
: unfortunately when I start tomcat the solr.xml is ignored and only the
Please elaborate on how exactly you pointed tomcat at your solr/home.
you mentioned "system property" but when using system properties to set
the Solr Ho
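For comparison, the usual system-property route is a JVM flag, e.g. in Tomcat's bin/setenv.sh (the path below is a hypothetical example):

```shell
# $CATALINA_HOME/bin/setenv.sh -- point the Solr webapp at its home dir
JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/opt/solr/home"
```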
: I have autocommit turned completely off -- both values set to zero. The DIH
...
: When I first set this up back on 1.4.1, I had some kind of severe problem when
: autocommit was turned on. I can no longer remember what it caused, but it was
: a huge showstopper of some kind.
the key q
On 10/15/2012 1:32 PM, Chris Hostetter wrote:
I think one, or both, of us is confused about how the dist-war-excl-slf4j
target is intended to be used.
You are very likely correct, and it's probably me that's confused.
I'm fairly certain you can't try to use slf4j/log4j from the sharedLib --
be
On 10/15/2012 2:51 PM, Chris Hostetter wrote:
For your usecase and upgrade: don't add the updateLog to your configs, and
don't add autocommit to your configs, and things should work fine. if you
decide you want to start using something that requires the updateLog, you
should probably add a short
I see that when there are 0 results with the grouping enabled, the max
score is -Infinity which causes parsing problems on my client. Without
grouping enabled the max score is 0.0. Is there any particular reason
for this difference? If not, would there be any resistance to
submitting a patch that w
Is there any way to easily determine how many documents exist in a
Lucene index segment? Ideally I want to check the document counts in
segments on an index that is being built by a large MySQL dataimport,
before the dataimport completes. If that's not possible, I can take
steps to do a small
Easiest way I know of without parsing any of the index files is to take the
size of the fdx file in bytes and divide by 8. This will give you the exact
number of documents before 4.0, and a close approximation in 4.0.
Though, the fdx file might not be on disk if you haven't committed.
-Michael
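That arithmetic as a small shell helper (segment path hypothetical; exact before 4.0, a close approximation in 4.0, per the above):

```shell
# Estimate a segment's doc count from its .fdx stored-fields index:
# file size in bytes divided by 8.
docs_from_fdx() {
  echo $(( $(stat -c%s "$1") / 8 ))
}
# Usage (hypothetical path): docs_from_fdx /path/to/index/_0.fdx
```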
Hi,
I am using MySQL for indexing data in Solr. I have two fields: "name"
and "college". How can I add auto-suggest based on these two fields?
> I am using mysql for solr indexing data in solr. I have two
> fields: "name"
> and "college". How can I add auto suggest based on these two
> fields?
Here is a blog post and a code example.
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
http://find.searchhub.org/?q=autosuggest+OR+autocomplete
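A common starting point (field names hypothetical, and the combined field would still need edge-ngram or suggester wiring as the links above describe) is to copy both source fields into one suggestion field in schema.xml:

```xml
<field name="suggest" type="text_general" indexed="true" stored="true"
       multiValued="true"/>
<copyField source="name" dest="suggest"/>
<copyField source="college" dest="suggest"/>
```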
- Original Message -
| From: "Rahul Paul"
| To: solr-user@lucene.apache.org
| Sent: Monday, October 15, 2012 9:01:14 PM
| Subject: Solr Autocomplete
|
| Hi,
| I am using mysql for solr indexing data in solr. I have two fields:
| "n
On 10/15/2012 8:06 PM, Michael Ryan wrote:
Easiest way I know of without parsing any of the index files is to take the
size of the fdx file in bytes and divide by 8. This will give you the exact
number of documents before 4.0, and a close approximation in 4.0.
Though, the fdx file might not be
The Solr home dir is, as suggested for Solr 4.0, located below Jetty.
So my directory structure is:
/srv/www/solr/solr-4.0.0/
-- dist ** has all apache solr and lucene libs not in .war
-- lib ** has all other libs not in .war and not in dist, but required
-- jetty ** the jetty copied from