REST API Alternative to admin/luke

2014-12-04 Thread Constantin Wolber
Hi,

we use dynamic fields in our schema.

If I use the admin/luke URL, all those dynamic fields are listed with their 
actual names.

If I use the REST endpoint /schema/fields, only the hard-coded fields are 
returned, and /schema/dynamicfields only returns the definitions of the 
dynamicField patterns. I was expecting a result similar to the one from the 
admin/luke URL, i.e. all the hard-coded fields plus all the fields generated 
dynamically.
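
For reference, this is roughly how I am calling the two endpoints; the host, 
port, and core name below are just placeholders for our installation:

import json
from urllib.request import urlopen

# Host, port and core name are placeholders.
base = "http://localhost:8983/solr/mycore"

# Schema API: only the fields (and dynamicField patterns) declared in schema.xml.
declared = json.load(urlopen(base + "/schema/fields?wt=json"))
print(sorted(f["name"] for f in declared["fields"]))

# Luke handler: the fields actually present in the index, including the
# concrete names generated from dynamicField patterns.
actual = json.load(urlopen(base + "/admin/luke?wt=json&numTerms=0"))
print(sorted(actual["fields"].keys()))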

Does anybody know whether something like this exists, or is Luke the only 
option I have?

Regards

Constantin

--
Constantin Wolber
Head of IT and Product Management

medical columbus AG
Herzog-Adolph-Str. 7
D-61462 Königstein
www.medicalcolumbus.de


tel + 49 (0) 61 74 / 96 17-60
fax +49 (0) 61 74 / 96 17-10
constantin.wol...@medicalcolumbus.de


Amtsgericht Königstein, HRB 4906
Management Board (Vorstand): Dirk Isenberg
Chairman of the Supervisory Board (Aufsichtsratsvorsitzender): Manfred Hellwig




Re: REST API Alternative to admin/luke

2014-12-04 Thread Constantin Wolber
Hi,

And thanks for the answers. So my understanding is at least correct that I did 
not overlook a feature of the REST endpoints. We will probably stick with the 
admin/luke endpoint to achieve our goal.

Since you have told me a lot about the XY problem, I will of course give you 
some more information regarding the X.

In our application we integrate a search based on Solr. We do not know in 
advance all the fields that will be indexed; they can change dynamically. So 
our idea was to build a generic search service that connects to the Solr core 
and builds the available search options dynamically. If that service encounters 
a field of type tint, for example, it can offer a range search to the user. 
With that approach, no change to the application is required when a new field 
is added.
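
A rough sketch of what that service does; the core URL is a placeholder, and 
the list of range-capable type names besides tint is just an example from our 
schema:

import json
from urllib.request import urlopen

# Placeholder core URL; numTerms=0 keeps the Luke response small.
LUKE_URL = "http://localhost:8983/solr/mycore/admin/luke?wt=json&numTerms=0"

# Field types for which a range search makes sense in our schema
# (tint is the one mentioned above, the others are assumptions).
RANGE_TYPES = {"tint", "tlong", "tfloat", "tdouble", "tdate"}

def build_search_options():
    """Derive the available search options from the fields actually indexed."""
    fields = json.load(urlopen(LUKE_URL))["fields"]
    options = []
    for name, info in fields.items():
        options.append({
            "field": name,
            # Range widget for numeric/date trie types, plain term search otherwise.
            "widget": "range" if info.get("type") in RANGE_TYPES else "term",
        })
    return options

if __name__ == "__main__":
    for option in build_search_options():
        print(option)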

Hope that makes it a bit clearer.

Regards

Constantin 


> On 04.12.2014 at 18:12, Chris Hostetter wrote:
> 
> 
> : Subject: REST API Alternative to admin/luke
> 
> this smells like an XY problem ... if /admin/luke gives you the data you 
> want, why not use /admin/luke ? ... what is it about /admin/luke that 
> prevents you from solving your problem? what is your ultimate goal?
> 
> : If I use the admin/luke URL all those dynamic fields are listed with their 
> actual name.
>...
> : dynamicFields. I was expecting a similar result like the one through the 
> : admin/luke URL which would return all the hard coded fields and all the 
> : ones dynamically being generated.
> 
> https://people.apache.org/~hossman/#xyproblem
> XY Problem
> 
> Your question appears to be an "XY Problem" ... that is: you are dealing
> with "X", you are assuming "Y" will help you, and you are asking about "Y"
> without giving more details about the "X" so that we can understand the
> full issue.  Perhaps the best solution doesn't involve "Y" at all?
> See Also: http://www.perlmonks.org/index.pl?node_id=542341
> 
> 
> 
> -Hoss
> http://www.lucidworks.com/


Re: REST API Alternative to admin/luke

2014-12-04 Thread Constantin Wolber
Hi,

Basically, the fact that it is an endpoint in the admin section is what made me 
wonder whether there is an alternative.

And it would have been nice to have a straightforward, resource-oriented 
approach, which the Luke handler certainly is not.

Regards 

Constantin



> On 04.12.2014 at 20:46, Chris Hostetter wrote:
> 
> 
> : I did not overlook a feature of the REST endpoints. We will probably 
> : stick with the admin/luke endpoint to achieve our goal.
> 
> Ok ... I mean ... yeah -- the /admin/luke endpoint exists to tell you what 
> fields are *actually* in your index, regardless of who/how they got into 
> your index.
> 
> The Schema API is for letting you do CRUD operations on your *schema* - 
> even if those fields (or dynamic field patterns) aren't used in your 
> index.
> 
> So based on what you said your goal is, /admin/luke is exactly what you 
> want.
> 
> But since you already knew about /admin/luke, and already knew it gave you 
> exactly what you wanted, I'm still not sure I understand what prompted you 
> to ask about trying to find a different way of doing this... ?
> 
> -Hoss
> http://www.lucidworks.com/


DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

2013-06-20 Thread Constantin Wolber
Hi,

I searched for a solution for quite some time but did not manage to find any 
real hints on how to fix it.


I'm using Solr 4.3.0 (1477023 - simonw - 2013-04-29 15:10:12) running in a 
Tomcat 6 container.

My data import setup is basically the following:

Data-config.xml:

<dataConfig>
  <document>
    <entity name="article"
            dataSource="ds1"
            query="SELECT * FROM article"
            deltaQuery="SELECT myownid FROM articleHistory WHERE modified_date > '${dih.last_index_time}'"
            deltaImportQuery="SELECT * FROM article WHERE myownid=${dih.delta.myownid}"
            pk="myownid">

      <entity name="supplier"
              dataSource="ds2"
              query="SELECT * FROM supplier WHERE status=1"
              processor="CachedSqlEntityProcessor"
              cacheKey="SUPPLIER_ID"
              cacheLookup="article.ARTICLE_SUPPLIER_ID"/>

      <entity name="attributes"
              dataSource="ds1"
              query="SELECT ARTICLE_ID,'Key:'+ATTRIBUTE_KEY+' Value:'+ATTRIBUTE_VALUE FROM attributes"
              cacheKey="ARTICLE_ID"
              cacheLookup="article.myownid"
              processor="CachedSqlEntityProcessor"/>
    </entity>
  </document>
</dataConfig>

OK, now for the problem:

At first I tried everything without the cache, but the full-import took a very 
long time, because the attributes query is pretty slow compared to the rest. As 
a result I got a processing speed of around 150 documents/s. When I switched 
everything to the CachedSqlEntityProcessor, the full-import processed at around 
4000 documents/s.

So the full-import is running quite fine. Now I wanted to use the delta-import. 
When running the delta-import I expected the ramp-up time to be about the same 
as for the full-import, since the whole supplier and attributes tables have to 
be loaded into the cache in the first step. But looking at the log file, the 
weird thing is that Solr seems to rebuild the cache for every single document 
that is processed. So currently my delta-import is a lot slower than the 
full-import. I even tried adding the deltaImportQuery parameter to the entity, 
but it doesn't change the behavior at all (of course I know it is not supposed 
to change anything in my setup).

The following solutions would be possible in my opinion: 

1. Is there any way to tell the config to ignore the Cache when running a delta 
import? That would help already because we are talking about the maximum of 500 
documents changed in 15 minutes compared to over 5 million documents in total. 
2. Get solr to not refresh the cash for every document. 

Best Regards

Constantin Wolber



AW: DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

2013-06-20 Thread Constantin Wolber
Hi,

and thanks for the answer, but I'm a little confused about what you are 
suggesting.
I have not really used the rootEntity attribute before, but from what I read in 
the documentation, that would result in two documents, one for each root entity 
(maybe with the same id, which would probably mean only one document ends up 
being stored).

It would be great if you could sketch the setup with the entities I provided, 
because currently I have no idea how to do it.

Regards

Constantin


-Original Message-
From: Noble Paul നോബിള്‍ नोब्ळ् [mailto:noble.p...@gmail.com]
Sent: Thursday, June 20, 2013 15:42
To: solr-user@lucene.apache.org
Subject: Re: DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

It is possible to create two separate root entities: one for full-import and 
another for delta. For the delta-import you can skip the cache that way.



--
-
Noble Paul


AW: DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

2013-06-20 Thread Constantin Wolber
Hi,

I may have been a little too fast with my response.

After reading a bit more, I imagine you meant running the full-import with the 
entity parameter set to the root entity for the full import, and running the 
delta-import with the entity parameter set to the delta entity. Is that correct?
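
In other words, something like this; the handler URL and the entity names 
article_full / article_delta are only placeholders for the two root entities:

from urllib.request import urlopen
from urllib.parse import urlencode

# Placeholder DataImportHandler URL; entity names stand for the two root entities.
DIH_URL = "http://localhost:8983/solr/mycore/dataimport"

def run_import(command, entity):
    """Trigger a DIH command restricted to one root entity."""
    params = urlencode({"command": command, "entity": entity, "wt": "json"})
    return urlopen(DIH_URL + "?" + params).read()

run_import("full-import", "article_full")    # cached entity, run for the full rebuild
run_import("delta-import", "article_delta")  # uncached entity, run every few minutes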

Regards

Constantin




AW: DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

2013-06-21 Thread Constantin Wolber
Hi,

thanks for the other ideas.

I worked around the problem with Noble Paul's idea, and this is working really 
well for me right now. My full-import takes around 40 minutes, and my 
delta-import, which runs every minute, finishes in less than 10 seconds. So 
that configuration seems pretty optimal for my setup.

Idea 1:
I will try it out some time soon.

Idea 2:
I tried that one, but it slows down the full-import in my case too. One table I 
use the cache for has more rows than the root entity's table, i.e. it has 
multiple rows per root-entity row. So a cache that is built up during the 
import does not help here, since cached rows are only used once; the only 
benefit I get from the cache is the prefilling, which is a lot faster than 
reading the rows on demand (see the small sketch after Idea 3 below).

Idea 3:
This would probably also be slower than the current configuration, since the 
prefilling takes around 2 minutes. As my delta-import currently runs every 
minute, that would not make sense.
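
A tiny, self-contained illustration of the point in Idea 2, with made-up 
attribute rows: each article id is looked up exactly once, so a lazily built 
cache never gets a second hit, and only prefilling the whole child table in one 
pass saves database round trips.

from collections import defaultdict

# Made-up child-table rows: several attribute rows per article id.
ATTRIBUTE_ROWS = [
    (1, "Key:color Value:red"),
    (1, "Key:size Value:XL"),
    (2, "Key:color Value:blue"),
]

# Prefilling: one pass over the whole child table builds the cache up front.
prefilled = defaultdict(list)
for article_id, attribute in ATTRIBUTE_ROWS:
    prefilled[article_id].append(attribute)

# Each root-entity row is processed once, so every key is read a single time;
# the prefilled lookup (one round trip) is the only real win.
for article_id in (1, 2):
    print(article_id, prefilled[article_id])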


Thanks and Regards

Constantin

-Original Message-
From: Dyer, James [mailto:james.d...@ingramcontent.com]
Sent: Thursday, June 20, 2013 18:51
To: solr-user@lucene.apache.org
Subject: RE: DataImportHandler: Problems with delta-import and CachedSqlEntityProcessor

Instead of specifying CachedSqlEntityProcessor, you can specify 
SqlEntityProcessor with "cacheImpl='SortedMapBackedCache'".  If you 
parameterize this, to have "SortedMapBackedCache" for full updates but blank 
for deltas, I think it will cache only on the full import.

Another option is to parameterize the child queries with a "where" clause, so 
if it is creating a new cache with every row, the cache will only contain the 
data needed for that child row.

A third option is to do your delta imports as described here:  
http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport
My experience is that this generally performs better than using the delta 
import feature anyhow.  The trick is in handling deletes, which will require 
their own entity and the $deleteDocById command.  See 
http://wiki.apache.org/solr/DataImportHandler#Special_Commands

But these are all workarounds.  This sounds like a bug or some subtle 
configuration problem.  I looked through the JIRA issues and did not see 
anything like this reported yet, but if you're pretty sure you are doing 
everything correctly you may want to open a bug ticket.  Be sure to flag it as 
"contrib - Dataimporthandler".

James Dyer
Ingram Content Group
(615) 213-4311

