Solr confFiles replication
Hello Apache Solr users,

I have a master-slave replication setup, and the slave is getting the index data replicated but not the configured confFiles. What could be the problem? Solr 1.4.1 is used.

Regards,
Stevo.
Start solr unsuccessfully on Geronimo
Hello all - Could anyone please shed some light on the issue below when starting Solr on Geronimo?

===
org.apache.jasper.JasperException: java.lang.IllegalStateException: No org.apache.InstanceManager set in ServletContext
org.apache.jasper.servlet.JspServletWrapper.getServlet(JspServletWrapper.java:151)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:324)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:286)
===

Additional info:
- deploys successfully, but the error occurs on start; it starts fine on Tomcat and Jetty
- Solr 1.4.1
- Geronimo 2.1.6
- OS: Unix and Win 7 (both get the same error)
- other applications (wars) run fine on this Geronimo
- the Jasper plugins are installed (jasper and jasper deployer)

Thanks much!
Re: Solr confFiles replication
Check your configuration and log file. And, remember, configuration files will only get replicated if their hashes differ. Also, new configuration files will not be replicated; you'll need to upload them to the slaves manually the first time. Slaves will not replicate what they don't have.

> Hello Apache Solr users,
>
> I have master-slave replication setup, and slave is getting index data
> replicated but not configured confFiles. What could be the problem?
> Solr 1.4.1 is used.
>
> Regards,
> Stevo.
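For reference, in Solr 1.4 conf file replication is driven by the confFiles entry on the master's ReplicationHandler, and the slave needs its own handler config (already in place) pointing at the master. A minimal sketch of both solrconfig.xml fragments; the host, core name, file list, and poll interval are illustrative:

```xml
<!-- Master solrconfig.xml: list the conf files to ship with the index -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">startup</str>
    <str name="replicateAfter">commit</str>
    <!-- comma-separated, relative to the core's conf/ dir -->
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- Slave solrconfig.xml: must already exist on the slave so it knows where to poll -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```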
Re: Solr confFiles replication
Thanks Markus for the insight!

I've figured out that initially the conf files need to be put manually on the slaves, so the slaves know how to connect to the master to start polling. I attempted several times to send this question of mine to the solr-user mailing list and got refused with spam qualifications; it turned out to be because the email was in HTML format. After switching to plain text the email reached the mailing list, but I had stripped off information during the attempts and didn't mention that replication of index data works - only conf file replication doesn't. Maybe hashes of the conf files are the issue here. Are they calculated automatically by master and slave? I assume the protocol is the same as for index data: the slave issues a replication request, gets in response a list of conf files with metadata including hashes that the master calculated for the conf files configured for replication; the slave then calculates hashes of its local conf files, compares them with the metadata received from the master, and decides whether or not to download the conf files.

The SolrReplication wiki page mentions "Only files in the 'conf' dir of the solr instance are replicated." (wish I could underline that "solr instance" fragment) - in my case there are two cores/indexes on a single Solr instance, where each core has its own /conf (and /data) dir. Since index data replication works well (the appropriate core's index data is replicated), I assume the sentence is just incomplete in mentioning the instance conf dir rather than the core conf dir.

The same wiki page also mentions "The files are replicated only along with a fresh index. That means even if a file is changed in the master the file is replicated only after there is a new commit/optimize on the master." This sentence doesn't mention conf file replication after startup.
Does this mean that schema.xml replication will not occur after master startup until a commit/optimize is issued, in the case when all of the following is done:
- schema.xml is listed in confFiles
- master is configured to replicateAfter startup, or commit or optimize
- master gets brought down
- master index data is deleted
- master schema.xml is changed
- and master is started up again?

Regards,
Stevo.

On Tue, Dec 28, 2010 at 1:06 PM, Markus Jelsma wrote:
> Check your configuration and log file. And, remember, log files will only get
> replicated if their hashes are different. And, new configuration files will not
> be replicated, you'll need to upload them to the slaves manually for the first
> time. Slaves will not replicate what they don't have.
>
>> Hello Apache Solr users,
>>
>> I have master-slave replication setup, and slave is getting index data
>> replicated but not configured confFiles. What could be the problem?
>> Solr 1.4.1 is used.
>>
>> Regards,
>> Stevo.
Re: Solr confFiles replication
On Tuesday 28 December 2010 15:02:24 Stevo Slavić wrote:
> Thanks Markus for the insight!
>
> I've figured out that initially conf files need to be put manually on
> slaves so slaves know how to connect to master to start polling. I've
> attempted several times to send this question of mine to solr-user
> mailing list, got refused with spam qualifications, found it was
> because email was in html format. After switching to plain text, email
> reached mailing list but I've stripped off information during attempts
> and didn't mention that replication of index data works - only conf
> file replication doesn't work. Maybe hashes of conf files are the
> issue here. Are they calculated automatically by master and slave? I
> assume protocol is same as for index data, where slave issues
> replication request, gets in response list of conf files with metadata
> including hashes that master calculated for its conf files configured
> for replication, slave then calculates hashes of its local conf files
> and does comparison with metadata received from master, and decides
> whether to download or not conf files.

Well, that's about how it works in a nutshell.

> SolrReplication wiki page mentions "Only files in the 'conf' dir of
> the solr instance are replicated." (wish I could underline that "solr
> instance" fragment) - in my case there are two cores/indexes on single
> solr instance, where each core has its own /conf (and /data) dir -
> since index data replication works well (appropriate core index data
> is replicated) I assume that it's only wrong/incomplete sentence that
> instance conf dir is mentioned and not core conf dir.

Replication in multi-core works as expected. In this case the instance dir equals solr/corename/conf/.

> Same wiki page also mentions "The files are replicated only along
> with a fresh index. That means even if a file is changed in the master
> the file is replicated only after there is a new commit/optimize on
> the master."
> This sentence doesn't mention conf file replication after startup.
> Does this mean that schema.xml replication will not occur
> after master startup until commit/optimize is issued in case when all
> of the following is done:
> - schema.xml is listed in confFiles
> - master is configured to replicateAfter startup, or commit or optimize
> - master gets brought down
> - master index data is deleted
> - master schema.xml is changed
> - and master is started up again?

Configuration files will only be sent over when index files are to be replicated. So if the master is reindexed, it will generate a new indexVersion, triggering the replication events on the slaves. Then the configuration files are replicated as well. Forcing replication won't replicate configuration files, iirc.

> Regards,
> Stevo.
>
> On Tue, Dec 28, 2010 at 1:06 PM, Markus Jelsma wrote:
> > Check your configuration and log file. And, remember, log files will only
> > get replicated if their hashes are different. And, new configuration
> > files will not be replicated, you'll need to upload them to the slaves
> > manually for the first time. Slaves will not replicate what they don't
> > have.
> >
> >> Hello Apache Solr users,
> >>
> >> I have master-slave replication setup, and slave is getting index data
> >> replicated but not configured confFiles. What could be the problem?
> >> Solr 1.4.1 is used.
> >>
> >> Regards,
> >> Stevo.

--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
Re: Solr confFiles replication
Clear, have to reindex. Thanks! Regards, Stevo. On Tue, Dec 28, 2010 at 3:17 PM, Markus Jelsma wrote: > > > On Tuesday 28 December 2010 15:02:24 Stevo Slavić wrote: >> Thanks Markus for the insight! >> >> I've figured out that initially conf files need to be put manually on >> slaves so slaves know how to connect to master to start polling. I've >> attempted several times to send this question of mine to solr-user >> mailing list, got refused with spam qualifications, found it was >> because email was in html format. After switching to plain text, email >> reached mailing list but I've stripped off information during attempts >> and didn't mention that replication of index data works - only conf >> file replication doesn't work. Maybe hashes of conf files are the >> issue here. Are they calculated automatically by master and slave? I >> assume protocol is same as for index data, where slave issues >> replicaiton request, gets in response list of conf files with metadata >> including hashes that master calculated for its conf files configured >> for replication, slave then calculates hashes of its local conf files >> and does comparison with metadata received from master, and decides >> whether to download or not conf files. > > Well, that's about how it works in a nut shell. > >> >> SolrReplication wiki page mentions "Only files in the 'conf' dir of >> the solr instance are replicated." (wish I could underline that "solr >> instance" fragment) - in my case there are two cores/indexes on single >> solr instance, where each core has its own /conf (and /data) dir - >> since index data replication works well (appropriate core index data >> is replicated) I assume that it's only wrong/incomplete sentence that >> instance conf dir is mentioned and not core conf dir. > > Replication in multi core works as expected. In this case instance dir equals > solr/corename/conf/. > >> >> Same wiki page also mentiones "The files are replicated only along >> with a fresh index. 
That means even if a file is changed in the master >> the file is replicated only after there is a new commit/optimize on >> the master. ". This sentence doesn't mention after startup conf files >> replication. Does this mean that schema.xml replication will not occur >> after master startup until commit/optimize is issued in case when all >> of the following is done: >> - schema.xml is listed in confFiles >> - master is configured to replicateAfter startup, or commit or optimize >> - master gets brought down >> - master index data is deleted >> - master schema.xml is changed >> - and master is started up again? > > Configuration files will only be sent over when index files are to be > replicated. > So if the master is reindexed, it will generate a new indexVersion, triggering > the replication events on the slaves. Then the configuration files are > replicated as well. Forcing replication won't replicatie configuration files > iirc. > >> >> Regards, >> Stevo. >> >> On Tue, Dec 28, 2010 at 1:06 PM, Markus Jelsma >> >> wrote: >> > Check your configuration and log file. And, remember, log files will only >> > get replicated if their hashes are different. And, new configuration >> > files will not be replicated, you'll need to upload them to the slaves >> > manually for the first time. Slaves will not replicate what they don't >> > have. >> > >> >> Hello Apache Solr users, >> >> >> >> I have master-slave replication setup, and slave is getting index data >> >> replicated but not configured confFiles. What could be the problem? >> >> Solr 1.4.1 is used. >> >> >> >> Regards, >> >> Stevo. > > -- > Markus Jelsma - CTO - Openindex > http://www.linkedin.com/in/markus17 > 050-8536620 / 06-50258350 >
geospatial search support for SOLR 1.3 and 1.4?
hi, we are currently using SOLR 1.3 and planning to use location-based search for some of our functionality. Is there any support for such a thing in 1.3? Do we need to upgrade to a 1.4+ version?

Thanks
Bharat Jain
Re: geospatial search support for SOLR 1.3 and 1.4?
You should indeed upgrade, and either use a 3rd-party plugin or wait for current trunk (Solr 4) or branch 3.x to be released, but that might take a while. If your data set is small enough and you don't have many updates, you could compute the distance sets outside Solr once a day, as we did in 1.3. But we only used it to show nearby items for a given document, no distance sorting.

On Tuesday 28 December 2010 15:55:43 Bharat Jain wrote:
> hi,
> we are currently using SOLR 1.3 and planning to use location based search
> for some of functionality. Is there any support for such a thing in 1.3? Do
> we need to upgrade to 1.4+ version.
>
> Thanks
> Bharat Jain

--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
Re: DIH and UTF-8
If you are using Tomcat, modify server.xml and check that URIEncoding="UTF-8" is set on the Connector.
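For context, URIEncoding is an attribute of Tomcat's HTTP Connector element. A minimal sketch of the relevant fragment of conf/server.xml; the port and timeout values are just examples, keep whatever your install already uses:

```xml
<!-- conf/server.xml: URIEncoding makes Tomcat decode request URIs
     and query strings as UTF-8 instead of the ISO-8859-1 default -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8" />
```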
Re: DIH and UTF-8
It was due to the way I was writing to the DB using our Rails application. Everything looked correct, but when retrieving it using the JDBC driver it was all mangled.

On 12/27/10 4:38 PM, Glen Newton wrote:
Is it possible your browser is not set up to properly display the Chinese characters? (I am assuming you are looking at things through your browser)
Do you have any problems viewing other Chinese documents properly in your browser?
Using mysql, can you see these characters properly?

What happens when you use curl or wget to get a document from Solr and look at it using something besides your browser?

Yes, I am running out of ideas! :-)

-Glen

On Mon, Dec 27, 2010 at 7:22 PM, Mark wrote:
Just like the user of that thread... I have my database, table, columns and system variables all set but it still doesn't work as expected.

Server version: 5.0.67 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW VARIABLES LIKE 'collation%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database   | utf8_general_ci |
| collation_server     | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0.00 sec)

mysql> SHOW VARIABLES LIKE 'character_set%';
+--------------------------+----------------------------------------+
| Variable_name            | Value                                  |
+--------------------------+----------------------------------------+
| character_set_client     | utf8                                   |
| character_set_connection | utf8                                   |
| character_set_database   | utf8                                   |
| character_set_filesystem | binary                                 |
| character_set_results    | utf8                                   |
| character_set_server     | utf8                                   |
| character_set_system     | utf8                                   |
| character_sets_dir       | /usr/local/mysql/share/mysql/charsets/ |
+--------------------------+----------------------------------------+
8 rows in set (0.00 sec)

Any other ideas? Thanks

On 12/27/10 3:23 PM, Glen Newton wrote:
[client]
default-character-set = utf8
[mysql]
default-character-set=utf8
[mysqld]
character_set_server = utf8
character_set_client = utf8
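For anyone hitting similar mangling between MySQL and DIH: the Connector/J driver's encoding can also be forced on the JDBC URL, independently of the server-side variables. A hedged sketch of a DIH dataSource element in data-config.xml; the host, database name, and credentials are placeholders:

```xml
<!-- data-config.xml: force the JDBC connection itself to UTF-8 -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb?useUnicode=true&amp;characterEncoding=UTF-8"
            user="dbuser"
            password="dbpass"/>
```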
Dynamic column names using DIH
Is there a way to create dynamic column names using the values returned from the query? For example:
Re: SpatialTierQueryParserPlugin Loading Error
Hi Grant, I grabbed the latest version from trunk this morning and am still unable to get any of the spatial functionality to work. I still seem to be getting the class loading errors that I was getting when using the patches and jar files I found all over the web. What I really need at this point is an example of solrconfig.xml and whatever else I need to include to make it work properly. I am using the Geonames DB with valid lat/longs in decimal degrees so I'm confident that the data are correct. I have tried several examples all with the same results. There are other patches like the following that show snippets of how to modify the solrconfig file but there is no definitive source... https://issues.apache.org/jira/secure/attachment/12452781/SOLR-2077.Quach.Mattmann.082210.patch.txt I would gladly update this page if I could just get it working. http://wiki.apache.org/solr/SpatialSearch w/r, Adam On Tue, Dec 14, 2010 at 9:04 AM, Grant Ingersoll wrote: > For this functionality, you are probably better off using trunk or > branch_3x. There are quite a few patches related to that particular one > that you will need to apply in order to have it work correctly. > > > On Dec 13, 2010, at 10:06 PM, Adam Estrada wrote: > > > All, > > > > Can anyone shed some light on this error. I can't seem to get this > > class to load. I am using the distribution of Solr from Lucid > > Imagination and the Spatial Plugin from here > > https://issues.apache.org/jira/browse/SOLR-773. I don't know how to > > apply a patch but the jar file is in there. What else can I do? 
> > > > org.apache.solr.common.SolrException: Error loading class > > 'org.apache.solr.spatial.tier.SpatialTierQueryParserPlugin' > > at > org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:373) > > at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:413) > > at > org.apache.solr.core.SolrCore.createInitInstance(SolrCore.java:435) > > at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1498) > > at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1492) > > at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1525) > > at org.apache.solr.core.SolrCore.initQParsers(SolrCore.java:1442) > > at org.apache.solr.core.SolrCore.(SolrCore.java:548) > > at > org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:137) > > at > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83) > > at > org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99) > > at > org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) > > at > org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594) > > at org.mortbay.jetty.servlet.Context.startContext(Context.java:139) > > at > org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) > > at > org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) > > at > org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) > > at > org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) > > at > org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147) > > at > org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:161) > > at > org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) > > at > org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147) > > at > org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) > > at > 
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) > > at org.mortbay.jetty.Server.doStart(Server.java:210) > > at > org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) > > at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:929) > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > > at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) > > at java.lang.reflect.Method.invoke(Unknown Source) > > at org.mortbay.start.Main.invokeMain(Main.java:183) > > at org.mortbay.start.Main.start(Main.java:497) > > at org.mortbay.start.Main.main(Main.java:115) > > Caused by: java.lang.ClassNotFoundException: > > org.apache.solr.spatial.tier.SpatialTierQueryParserPlugin > > at java.net.URLClassLoader$1.run(Unknown Source) > > at java.security.AccessController.doPrivileged(Native Method) > > at java.net.URLClassLoader.findClass(Unknown Source) > > at java.lang.ClassLoader.loadClass(Unknown Source) > > at java.net.FactoryURLClassLoader.loadClass(Unknown Source) > > at java.lang.ClassLoader.loadClass(Unknown Source) > > at java.lang.Class.forN
Re: SpatialTierQueryParserPlugin Loading Error
Hi Adam, I cut a branch at Github of a forked Solr 1.5 that applied a bunch of patches that my student and I did in my CSCI 572 class at USC. The branch is here [1]. You can simply use Git to go grab that if you want and not worry about the patches either. I sent an email [2] and filed a bunch of JIRA issues (referenced in that email) but the patches weren't really applied hence the Git fork. Also if you are interested in just spatial stuff, check out Apache SIS [3], a project where we're trying to replicate functionality from e.g., a Geotools or JTS, but with an ASLv2 license. We're not specifically focused on spatial search in SIS, but it is definitely one of the areas. HTH! Cheers, Chris [1] https://github.com/chrismattmann/solrcene [2] http://s.apache.org/JI6 [3] http://incubator.apache.org/sis/ On Dec 28, 2010, at 5:54 PM, Adam Estrada wrote: > Hi Grant, > > I grabbed the latest version from trunk this morning and am still unable to > get any of the spatial functionality to work. I still seem to be getting the > class loading errors that I was getting when using the patches and jar files > I found all over the web. What I really need at this point is an example of > solrconfig.xml and whatever else I need to include to make it work properly. > I am using the Geonames DB with valid lat/longs in decimal degrees so I'm > confident that the data are correct. I have tried several examples all with > the same results. > > There are other patches like the following that show snippets of how to > modify the solrconfig file but there is no definitive source... > > https://issues.apache.org/jira/secure/attachment/12452781/SOLR-2077.Quach.Mattmann.082210.patch.txt > > I would gladly update this page if I could just get it working. > http://wiki.apache.org/solr/SpatialSearch > > w/r, > Adam > > > On Tue, Dec 14, 2010 at 9:04 AM, Grant Ingersoll wrote: > >> For this functionality, you are probably better off using trunk or >> branch_3x. 
There are quite a few patches related to that particular one >> that you will need to apply in order to have it work correctly. >> >> >> On Dec 13, 2010, at 10:06 PM, Adam Estrada wrote: >> >>> All, >>> >>> Can anyone shed some light on this error. I can't seem to get this >>> class to load. I am using the distribution of Solr from Lucid >>> Imagination and the Spatial Plugin from here >>> https://issues.apache.org/jira/browse/SOLR-773. I don't know how to >>> apply a patch but the jar file is in there. What else can I do? >>> >>> org.apache.solr.common.SolrException: Error loading class >>> 'org.apache.solr.spatial.tier.SpatialTierQueryParserPlugin' >>> at >> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:373) >>> at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:413) >>> at >> org.apache.solr.core.SolrCore.createInitInstance(SolrCore.java:435) >>> at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1498) >>> at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1492) >>> at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1525) >>> at org.apache.solr.core.SolrCore.initQParsers(SolrCore.java:1442) >>> at org.apache.solr.core.SolrCore.(SolrCore.java:548) >>> at >> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:137) >>> at >> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83) >>> at >> org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99) >>> at >> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) >>> at >> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594) >>> at org.mortbay.jetty.servlet.Context.startContext(Context.java:139) >>> at >> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) >>> at >> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) >>> at >> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) >>> at >> 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) >>> at >> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147) >>> at >> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:161) >>> at >> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) >>> at >> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147) >>> at >> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) >>> at >> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) >>> at org.mortbay.jetty.Server.doStart(Server.java:210) >>> at >> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) >>> at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:929) >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Nati
Re: SpatialTierQueryParserPlugin Loading Error
On Tue, Dec 28, 2010 at 8:54 PM, Adam Estrada wrote:
> I would gladly update this page if I could just get it working.
> http://wiki.apache.org/solr/SpatialSearch

Everything on that wiki page should work w/o patches on trunk. I just ran through all of the examples, and everything seemed to be working fine.

-Yonik
http://www.lucidimagination.com
Re: SpatialTierQueryParserPlugin Loading Error
Thanks a bunch for all the great responses! I think first thing tomorrow I will grab a fresh version from trunk and then walk through the tutorial; I have not done that in quite some time. I will also investigate the version in Git to see which one is easier to work with. I like the idea of building a JTS-like library with a looser license - that's the advantage of the Apache license, right? I suppose licensing is a whole other topic altogether ;-) Thanks again for all the great support!

Adam

On Dec 28, 2010, at 9:29 PM, Yonik Seeley wrote:
> On Tue, Dec 28, 2010 at 8:54 PM, Adam Estrada wrote:
>> I would gladly update this page if I could just get it working.
>> http://wiki.apache.org/solr/SpatialSearch
>
> Everything on that wiki page should work w/o patches on trunk.
> I just ran through all of the examples, and everything seemed to be
> working fine.
>
> -Yonik
> http://www.lucidimagination.com
Re: old index files not deleted on slave
We have a master/slave setup. Every 5 minutes the index is replicated from master to slave and installed on the slave. But on Linux, when the snapinstaller script is called on the slave, it fails with the error below in the logs:

/bin/rm: cannot remove `/ngs/app/esearcht/Slave2index/data/index/.nfs000111030749': Device or resource busy

The error occurs in the snapinstaller script at these lines:

cp -lr ${name}/ ${data_dir}/index.tmp$$ && \
/bin/rm -rf ${data_dir}/index && \
mv -f ${data_dir}/index.tmp$$ ${data_dir}/index

It is not able to remove the index folder, so the index.tmp files keep accumulating in the data directory. Our data directory is "/ngs/app/esearcht/Slave2index/data". When checked with ls -al in the index directory, there are some .nfs files still there, which prevent the index directory from being deleted - and these .nfs files are still in use by Solr in JBoss. This setup only has the issue on Linux. Is this a known bug on Linux?

--
View this message in context: http://lucene.472066.n3.nabble.com/old-index-files-not-deleted-on-slave-tp2113493p2160924.html
Sent from the Solr - User mailing list archive at Nabble.com.
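For background: those `.nfsXXXX` entries are how an NFS client "silly renames" a file that was deleted while some process (here, Solr running in JBoss) still holds it open, so `rm -rf` on the directory fails until the old searcher releases the files. The failing install step itself is an ordinary copy-and-swap; a standalone sketch of that pattern with hypothetical local paths (on local disk the swap succeeds even with open files, which is why this only bites on NFS):

```shell
#!/bin/sh
# Copy-and-swap install, as snapinstaller does: hard-link the snapshot
# into a temp dir, then swap it into place. Paths are made-up demo paths.
set -e
data_dir=/tmp/demo_data
name=/tmp/demo_snapshot

mkdir -p "$name" "$data_dir/index"
echo old > "$data_dir/index/segments"
echo new > "$name/segments"

cp -lr "$name/" "$data_dir/index.tmp$$"   # hard links, not data copies
rm -rf "$data_dir/index"                  # this rm is what fails on .nfs* files
mv -f "$data_dir/index.tmp$$" "$data_dir/index"

cat "$data_dir/index/segments"            # -> new
```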
Re: old index files not deleted on slave
We are using Locktype "single". -- View this message in context: http://lucene.472066.n3.nabble.com/old-index-files-not-deleted-on-slave-tp2113493p2161030.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Dynamic column names using DIH
On Wed, Dec 29, 2010 at 2:59 AM, Mark wrote:
> Is there a way to create dynamic column names using the values returned from
> the query?
>
> For example:
>
> dataSource="my_database"
> query="select * from foo where item_id=${item.id}">

Not sure if that syntax will work, but you can use a transformer to add a dynamic column. Please see http://wiki.apache.org/solr/DataImportHandler#Transformer

Regards,
Gora
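One way the transformer approach is commonly written is with DIH's ScriptTransformer, which can put arbitrarily named columns into each row. A hedged sketch of a data-config.xml; the entity, column names, and query are made up, and it assumes a matching dynamicField pattern (e.g. `*_s`) exists in schema.xml:

```xml
<dataConfig>
  <script><![CDATA[
    function makeDynamic(row) {
      // Build a field name from a value in the row, e.g. "color_s",
      // and store another column's value under that name.
      var name = row.get('attr_name') + '_s';
      row.put(name, row.get('attr_value'));
      return row;
    }
  ]]></script>
  <document>
    <entity name="item"
            query="select * from foo"
            transformer="script:makeDynamic"/>
  </document>
</dataConfig>
```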
dynamic fields revisited
Well, getting close to the time when the 'rubber meets the road'. A couple of questions about dynamic fields.

A/ How much room in the index do 'non-used' dynamic fields add per record, if any?

B/ Is the search done on the dynamic field name in the schema, or on the name that was matched?

C/ Anyone done something like:

//schema file// (representative, not actual)
*_int1
*_int2
*_int3
*_int4
*_datetime1
*_datetime2
.
.

Then have fields in the imported data (especially using a DIH importing from a VIEW) that have custom names like:

//import source// (representative, not actual)
custom_labelA_int1
custom_labelB_int2
custom_labelC_datetime1
custom_labelD_datetime2

Is this how dynamic fields are used? I was thinking of having approximately 1-20 dynamic fields per datatype of interest.

D/ If I wanted all text-based dynamic fields added to some common field in the index (sorry, bad terminology), how is that done?

Dennis Gearon

Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a better idea to learn from others' mistakes, so you do not have to make them yourself. from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
EARTH has a Right To Life, otherwise we all die.
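Regarding C/ and D/, a minimal schema.xml sketch of the pattern being asked about; the field and type names are illustrative, not from any actual schema. The catch-all field in D/ is what copyField is for:

```xml
<!-- schema.xml: dynamic fields match incoming names by pattern at index time -->
<dynamicField name="*_int1"      type="int"  indexed="true" stored="true"/>
<dynamicField name="*_int2"      type="int"  indexed="true" stored="true"/>
<dynamicField name="*_datetime1" type="date" indexed="true" stored="true"/>
<dynamicField name="*_txt"       type="text" indexed="true" stored="true"/>

<!-- D/: funnel every text-ish dynamic field into one common searchable field -->
<field name="text_all" type="text" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="*_txt" dest="text_all"/>
```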
Re: DIH and UTF-8
Hi Mark,

Could you offer a more technical explanation of the Rails problem, so that if others encounter a similar problem your efforts in finding the issue will be available to them? :-)

Thanks,
Glen

PS. This has wandered somewhat off-topic for this list: apologies & thanks for the patience of this list...

On Tue, Dec 28, 2010 at 4:15 PM, Mark wrote:
> It was due to the way I was writing to the DB using our Rails application.
> Everything looked correct but when retrieving it using the JDBC driver it
> was all mangled.
>
> On 12/27/10 4:38 PM, Glen Newton wrote:
>>
>> Is it possible your browser is not set up to properly display the
>> Chinese characters? (I am assuming you are looking at things through
>> your browser)
>> Do you have any problems viewing other Chinese documents properly in
>> your browser?
>> Using mysql, can you see these characters properly?
>>
>> What happens when you use curl or wget to get a document from Solr and
>> look at it using something besides your browser?
>>
>> Yes, I am running out of ideas! :-)
>>
>> -Glen
>>
>> On Mon, Dec 27, 2010 at 7:22 PM, Mark wrote:
>>>
>>> Just like the user of that thread... I have my database, table, columns
>>> and system variables all set but it still doesn't work as expected.
>>>
>>> Server version: 5.0.67 Source distribution
>>>
>>> Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
>>>
>>> mysql> SHOW VARIABLES LIKE 'collation%';
>>> +----------------------+-----------------+
>>> | Variable_name        | Value           |
>>> +----------------------+-----------------+
>>> | collation_connection | utf8_general_ci |
>>> | collation_database   | utf8_general_ci |
>>> | collation_server     | utf8_general_ci |
>>> +----------------------+-----------------+
>>> 3 rows in set (0.00 sec)
>>>
>>> mysql> SHOW VARIABLES LIKE 'character_set%';
>>> +--------------------------+----------------------------------------+
>>> | Variable_name            | Value                                  |
>>> +--------------------------+----------------------------------------+
>>> | character_set_client     | utf8                                   |
>>> | character_set_connection | utf8                                   |
>>> | character_set_database   | utf8                                   |
>>> | character_set_filesystem | binary                                 |
>>> | character_set_results    | utf8                                   |
>>> | character_set_server     | utf8                                   |
>>> | character_set_system     | utf8                                   |
>>> | character_sets_dir       | /usr/local/mysql/share/mysql/charsets/ |
>>> +--------------------------+----------------------------------------+
>>> 8 rows in set (0.00 sec)
>>>
>>> Any other ideas? Thanks
>>>
>>> On 12/27/10 3:23 PM, Glen Newton wrote:
>>>> [client]
>>>> default-character-set = utf8
>>>> [mysql]
>>>> default-character-set=utf8
>>>> [mysqld]
>>>> character_set_server = utf8
>>>> character_set_client = utf8