--- On Mon, 12/20/10, Lance Norskog wrote:
> You said: Currently these tables all live in the same database, but in
> the future they may be moved to different servers to scale out if the
> needs arise.
>
> That's why I concentrated on the JDBC url problem.
>
> But you can use a file as a
You said: Currently these tables all live in the same database, but in
the future they may be moved to different servers to scale out if the
needs arise.
That's why I concentrated on the JDBC url problem.
But you can use a file as a list of tables. Read each line, and a
sub-entity can substitute
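A minimal sketch of that file-driven setup, assuming DIH's standard LineEntityProcessor; the file path, table columns, and JDBC url are made up:

```xml
<!-- data-config.xml sketch: the outer entity reads table names from a
     plain-text file (one per line); the inner entity substitutes each
     name into its query via ${tables.rawLine}. -->
<dataConfig>
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"/>
  <document>
    <entity name="tables" processor="LineEntityProcessor"
            url="/etc/solr/table-list.txt" rootEntity="false"
            dataSource="null">
      <entity name="rows" dataSource="db"
              query="select id, title from ${tables.rawLine}">
        <field column="id" name="id"/>
        <field column="title" name="title"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

To point a table at a different server later, the sub-entity could reference a per-line dataSource instead of the shared one.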
If you have a load balancer available, that is a much cleaner solution
than anything else. After the main indexer comes back, you have to get
the current index state to it to start again. But otherwise
On Sun, Dec 19, 2010 at 10:39 AM, Upayavira wrote:
>
>
> On Sun, 19 Dec 2010 10:20 -0800, "Tri
On Mon, Dec 20, 2010 at 3:13 AM, Cameron Hurst wrote:
> I am at the point in my setup where I am happy with how things are being
> indexed, and my interface is all good to go, but what I don't know how to
> judge is how often it will be queried and how much resources it needs to
> function properly.
> Full Import:
> http://localhost:8983/solr/select?clean=false&commit=true&qt=%2Fdataimport&command=full-import
> Reload Configuration:
> http://localhost:8983/solr/select?clean=false&commit=true&qt=%2Fdataimport&command=reload-config
>
> All,
>
> The links above are meant for me to reload the
--- On Mon, 12/20/10, Adam Estrada wrote:
> From: Adam Estrada
> Subject: [Reload-Config] not working
> To: solr-user@lucene.apache.org
> Date: Monday, December 20, 2010, 5:33 AM
> Full Import:
> http://localhost:8983/solr/select?clean=false&commit=true&qt=%2Fdataimport&command=full-import
> h
Full Import:
http://localhost:8983/solr/select?clean=false&commit=true&qt=%2Fdataimport&command=full-import
Reload Configuration:
http://localhost:8983/solr/select?clean=false&commit=true&qt=%2Fdataimport&command=reload-config
All,
The links above are meant for me to reload the configuration fi
> I have a custom library, which is used to input a file path and it
> returns file content as a string output.
> My DB has a file path in one of the tables, and I am using DIH
> configuration in Solr to do the indexing. I couldn't use
> TikaEntityProcessor to do indexing of a file located in file sy
Hi,
I have a custom library, which is used to input a file path and it returns
file content as a string output.
My DB has a file path in one of the tables, and I am using DIH configuration
in Solr to do the indexing. I couldn't use TikaEntityProcessor to do indexing
of a file located in the file system. I thou
This is helpful. Thank you.
--- On Sun, 12/19/10, Dennis Gearon wrote:
> From: Dennis Gearon
> Subject: Re: DIH for sharded database?
> To: solr-user@lucene.apache.org
> Date: Sunday, December 19, 2010, 11:56 AM
> Some talk on giant databases in
> postgres:
>
> http://wiki.postgresql.org/ima
Hi,
I watched the Lucid webcast:
http://www.lucidimagination.com/solutions/webcasts/faceting
It talks about encoding hierarchical categories to facilitate faceting. So a
category "path" of "NonFic>Science" would be encoded as the multivalues
"0/NonFic" & "1/NonFic/Science".
1) My categories ar
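The encoding described in the webcast can be sketched in a few lines (a hypothetical helper, not from the webcast itself):

```python
def encode_category_path(path, sep=">"):
    """Encode a category path such as "NonFic>Science" into the
    depth-prefixed multivalues used for hierarchical faceting."""
    parts = path.split(sep)
    return ["{}/{}".format(depth, "/".join(parts[:depth + 1]))
            for depth in range(len(parts))]

print(encode_category_path("NonFic>Science"))
# → ['0/NonFic', '1/NonFic/Science']
```

Faceting on the tokens with a `0/` prefix then gives the top level, and drilling into a value means faceting on the next depth prefix.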
On 19.12.2010, at 23:30, Alexey Serba wrote:
>
> Also Ephraim proposed a really neat solution with GROUP_CONCAT, but
> I'm not sure that all RDBMS-es support that.
That's MySQL-only syntax,
but if you google you can find similar solutions for other RDBMSes.
regards,
Lukas Kahwe Smith
m...@pooteew
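For illustration, the MySQL GROUP_CONCAT aggregation alongside one PostgreSQL equivalent (table and column names are made up):

```sql
-- MySQL: collapse a multi-row join into one delimited column,
-- so DIH gets one row per document instead of one per sub-entity row
SELECT t.id, GROUP_CONCAT(a.name SEPARATOR ', ') AS artists
FROM track t
JOIN artist a ON a.track_id = t.id
GROUP BY t.id;

-- PostgreSQL 9.0+ equivalent: string_agg
SELECT t.id, STRING_AGG(a.name, ', ') AS artists
FROM track t
JOIN artist a ON a.track_id = t.id
GROUP BY t.id;
```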
> With subquery and with left join: 320k in 6 Min 30
That's 820 records per second. It's _really_ impressive considering the
fact that DIH performs a separate SQL query for every record in your
case.
>> So there's one track entity with an artist sub-entity. My (admittedly
>> rather limited) experien
I am at the point in my setup where I am happy with how things are being
indexed, and my interface is all good to go, but what I don't know how to
judge is how often it will be queried and how much resources it needs to
function properly. So what I am looking for is some sort of performance
monitorin
Hi Pavel,
I had a similar problem several years ago - I had to find
geographical locations in textual descriptions, geocode these objects
to lat/long during indexing process and allow users to filter/sort
search results to specific geographical areas. The important issue was
that there were seve
On Sun, 19 Dec 2010 10:20 -0800, "Tri Nguyen"
wrote:
> How do we tell the slaves to point to the new master without modifying
> the config files? Can we do this while the slave is up, issuing a
> command to it?
I believe this can be done (details are in
http://wiki.apache.org/solr/SolrReplicat
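For what it's worth, the replication handler's fetchindex command can override the configured master per request, so no config edit is needed (a sketch; host names are hypothetical and the handler is assumed to be mounted at /replication):

```
curl "http://slave1:8983/solr/replication?command=fetchindex&masterUrl=http://master2:8983/solr/replication"
```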
Jan,
I'd appreciate a little more explanation here. I've explored SolrCloud
somewhat, but there are some bits of this architecture I don't yet get.
You say, next time an "indexer slave" pings ZK. What is an "indexer
slave"? Is that the external entity that is posting indexing
content? If this
How do we tell the slaves to point to the new master without modifying the
config files? Can we do this while the slave is up, issuing a command to it?
Thanks,
Tri
--- On Sun, 12/19/10, Upayavira wrote:
From: Upayavira
Subject: Re: master master, repeaters
To: solr-user@lucene.apache.org
We had a (short) thread on this late last week.
Solr doesn't support automatic failover of the master, at least in
1.4.1. I've been discussing with my colleague (Tommaso) about ways to
achieve this.
There are ways we could 'fake it', scripting the following:
* set up a 'backup' master, as a repl
Some talk on giant databases in postgres:
http://wiki.postgresql.org/images/3/38/PGDay2009-EN-Datawarehousing_with_PostgreSQL.pdf
wikipedia
http://en.wikipedia.org/wiki/Partition_%28database%29
(says to use a UNION)
postgres description on how to do it:
http://www.postgresql.org/docs/curr
The easiest way, and probably the way the database needs to use those shards,
is to use a view with a query; I think it joins on the primary key.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea
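The UNION idea from the Postgres links above can be sketched as a view that DIH queries as if it were one table (shard table names are made up):

```sql
-- Present several per-shard tables to DIH as a single relation
CREATE VIEW all_tracks AS
    SELECT id, title FROM tracks_shard1
    UNION ALL
    SELECT id, title FROM tracks_shard2
    UNION ALL
    SELECT id, title FROM tracks_shard3;
```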
On 12/19/2010 2:07 AM, Tri Nguyen wrote:
Was wondering about the pros and cons of using sharding versus cores.
An index can be split up into multiple cores or multiple shards.
So why one over the other?
If you split your index into multiple cores, you still have to use the
shards parameter
Well, they can be different beasts. First of all, different cores can have
different schemas, which is not true of shards. Also, shards are almost
always assumed to be running on different machines as a scaling technique,
whereas multiple cores run on a single Solr instance.
So using multiple core
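For reference, the shards parameter takes a comma-separated list of host:port/core entries, so cores on one instance can be queried distributed just like remote shards (URL is a sketch):

```
http://localhost:8983/solr/core0/select?q=*:*&shards=localhost:8983/solr/core0,localhost:8983/solr/core1
```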
Hi,
Was wondering about the pros and cons of using sharding versus cores.
An index can be split up into multiple cores or multiple shards.
So why one over the other?
Thanks,
tri
Hi,
In the master-slave configuration, I'm trying to figure out how to configure
the system setup for master failover.
Does solr support master-master setup? From my readings, solr does not.
I've read about repeaters as well, where the slave can act as a master. When
the main master goes do