Aaron - I for one sympathize.  When I pause to think of the stacks upon stacks 
of technologies that something like Solr is built upon… my head spins, and I 
feel for the folks coming to computer science these days and facing the whole 
Java and Big Data stack and all that goes along with it (JVM/memory/GC up 
through network topology and architecture with 3x ZooKeeper plus NxM Solr 
nodes, and beyond to data modeling, schema design, and query parameter tuning).

---

It’s good for us to hear the ugly/painful side of folks’ experiences.  It’s 
driven us to where I find myself iterating with Solr in my day job like this…

   $ bin/solr create -c my_collection
   $ bin/post -c my_collection /data/docs.json

and http://…/select?q=…&wt=csv…
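
To make that concrete, a minimal sketch of the query step with curl (assuming 
a stock local Solr on the default port, and the collection created above) 
might look like:

   # match all docs, return results as CSV via the stock /select handler
   $ curl "http://localhost:8983/solr/my_collection/select?q=*:*&wt=csv"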

So “it works for me”, but that’s not a fair response to the struggles of 
users.   Though we’ve come a long way, we’ve got a ways to go as well.

        Erik

p.s. - 

> Never mind the fact that the XML-based configuration process is an antiquated 
> nightmare when the rest of the world has long since moved onto databases.

Well, to that point - the world that I work in really boils down to plain 
text (alas, mostly JSON these days, but even that’s an implementation detail) 
stuffed into git repositories, and played into new Solr environments by 
uploading configuration files or, more recently, by hitting the Solr 
configuration APIs to add/configure fields, set up request handlers, and 
cover the basics of what needs to be done.  No XML needed these days.   No 
(relational, JDBC) databases either, for that matter :)
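
As a rough sketch of that API-driven approach (assuming a local Solr on the 
default port; the collection, field, and handler names here are just 
placeholders):

   # add a field via the Schema API
   $ curl -X POST -H 'Content-type:application/json' \
       -d '{"add-field": {"name": "title_t", "type": "text_general", "stored": true}}' \
       http://localhost:8983/solr/my_collection/schema

   # register a request handler via the Config API
   $ curl -X POST -H 'Content-type:application/json' \
       -d '{"add-requesthandler": {"name": "/browse", "class": "solr.SearchHandler"}}' \
       http://localhost:8983/solr/my_collection/config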

> Maybe this will help someone else out there.

Thanks for taking the time to detail your struggles to the community.  It is 
helpful to see where the rough edges are in this whole business, and to work 
on smoothing them out.   But it’s no easy business, having these stacks of 
dependencies and complexities piled on top of one another and trying to get 
it all fired up properly and usably.

        Erik
