To get the schema.xml file, look at how Solr's admin/index.jsp fetches
it under the "Schema" button.
You cannot get a nice, cleanly parsed schema object tree from SolrJ.
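Since the admin UI serves config files over HTTP (via the show-file admin handler), a plain HTTP GET of schema.xml works and you can parse the XML yourself. A minimal pure-JDK sketch; the base URL is an assumption, and multicore setups need the core name in the path:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SchemaFetch {
    // Build the URL the admin pages use to serve raw config files.
    // The path and parameters mirror the admin file handler; adjust per your setup.
    static String schemaUrl(String solrBase) {
        return solrBase + "/admin/file/?file=schema.xml&contentType=text/xml";
    }

    public static void main(String[] args) throws Exception {
        String url = schemaUrl("http://localhost:8983/solr");
        System.out.println(url);
        if (args.length > 0) { // pass any argument to actually fetch from a running Solr
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8))) {
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line); // raw schema.xml; feed it to an XML parser
                }
            }
        }
    }
}
```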
On Tue, Jun 8, 2010 at 5:16 AM, Peter Karich wrote:
> Hi Raakhi,
>
> I am not sure if I understand your usecase correctly,
> b
Also, you may need this system property in your client app:
java -Dfile.encoding=utf-8 ..
On Tue, Jun 8, 2010 at 4:52 PM, jlist9 wrote:
> Ah, I didn't know this. This should be much simpler. Thank you very much!
>
> On Tue, Jun 8, 2010 at 12:57 AM, Ahmet Arslan wrote:
>>> Meanwhile, I'd lik
Ah, I didn't know this. This should be much simpler. Thank you very much!
On Tue, Jun 8, 2010 at 12:57 AM, Ahmet Arslan wrote:
>> Meanwhile, I'd like to try using POST, but I didn't find
>> information
>> about how to do this. Could someone point me to a link to
>> some
>> sample code?
>
>
> you
(10/06/09 7:36), Dragisa Krsmanovic wrote:
When we send the HTTP commit, the response sometimes takes more than 60s and our client times out after that. The whole operation takes 200+ seconds. Aren't waitFlush="false" and
waitSearcher="false" supposed to tell Solr to return the response
immediately?
INFO: start
co
On 6/8/10 3:36 PM, Dragisa Krsmanovic wrote:
When we send the HTTP commit, the response sometimes takes more than 60s and our client times out after that. The whole operation takes 200+ seconds. Aren't waitFlush="false" and
waitSearcher="false" supposed to tell Solr to return the response
immediately?
INFO: start
Hi Glen,
Thank you very much for the quick response. I would like to try increasing
the netTimeoutForStreamingResults; is that something I can do on the
MySQL side, or on the Solr side?
Giri
On Tue, Jun 8, 2010 at 6:17 PM, Glen Newton wrote:
> As the index gets larger, the underlying housek
When we send the HTTP commit, the response sometimes takes more than 60s and our client times out after that. The whole operation takes 200+ seconds. Aren't waitFlush="false" and
waitSearcher="false" supposed to tell Solr to return the response
immediately?
INFO: start
commit(optimize=true,waitFlush=false,waitSearch
As the index gets larger, the underlying housekeeping of the Lucene
index sometimes causes pauses in the indexing. The JDBC connection
(and/or the underlying socket) to the MySql database can time out
during these pauses.
- If it is not set, you should add this to your JDBC url: autoReconnect=true
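In DIH terms that means adding the property to the dataSource URL in data-config.xml. A sketch, where the host, database, credentials, and the timeout value are placeholders (both connection properties come from MySQL Connector/J):

```xml
<!-- sketch: autoReconnect and netTimeoutForStreamingResults are Connector/J
     connection properties; host/db/user/pass are placeholders -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb?autoReconnect=true&amp;netTimeoutForStreamingResults=3600"
            user="user" password="pass" batchSize="-1"/>
```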
Hi Group,
I have been trying to index about 70 million records into the Solr index. The
data is coming from a MySQL database, and I am using the DataImportHandler
with batchSize set to -1. When I perform a full-import, it indexes about 27
million records and then throws the following exception:
Any help
Hi folks,
We have a data cleanup effort going on here, and I thought I would
share some information about how to poke around your facet values.
Most of this comes from:
http://wiki.apache.org/solr/SimpleFacetParameters
Exploring Facet Values:
---
facet field to examine:
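As a concrete illustration of the parameters from that wiki page, a request like the following dumps every value of a facet field without fetching any documents (the field name and URL are placeholders; facet.limit=-1 means no limit, facet.mincount=1 hides unused values, and facet.sort=lex sorts alphabetically):

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=my_field&facet.limit=-1&facet.mincount=1&facet.sort=lex
```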
The following should work on centos/redhat, don't forget to edit the paths,
user, and java options for your environment. You can use chkconfig to add it
to your startup.
Note: this script assumes that the Solr webapp is configured using JNDI in a
Tomcat context fragment. If not, you will need to ad
So let me understand what you said: you went through the trouble to
implement a geospatial
solution using Solr 1.5, and it worked really well. You saw no signs of
instability, but decided not to use it anyway?
Did you put it through a routine of tests and witness some stability
problem? Or just guessi
Andrew Clegg wrote:
>
> Re. your config, I don't see a minTokenLength in the wiki page for
> deduplication, is this a recent addition that's not documented yet?
>
Sorry about this -- stupid question -- I should have read back through the
thread and refreshed my memory.
Neeb wrote:
>
> Just wondering if you ever managed to run TextProfileSignature based
> deduplication. I would appreciate it if you could send me the code
> fragment for it from solrconfig.
>
Actually, the project it was for got postponed and I got distracted by
other things, for now at least
When I wanted to add some content to the solrj wiki for glassfish, I had a
problem in that their anti-spam measures broke the ability to create a new
account. Someone here (Chris I think) was kind enough to create a ticket in
the correct place:
https://issues.apache.org/jira/browse/INFRA-2726
2010/5/22 Noble Paul നോബിള് नोब्ळ् :
> just copy the dih-extras jar file from the nightly should be fine
Now that I've finally got a server on which to attempt to set these
things up... this turns out not to be a viable solution. The extras
jar does contain the TikaEntityProcessor class, but NOT
On Tue, Jun 8, 2010 at 11:00 AM, K Wong wrote:
> Okay. I've been running multicore Solr 1.4 on Tomcat 5.5/OpenJDK 6
> straight out of the centos repo and I've not had any issues. We're not
> doing anything wild and crazy with it though.
It's nice to know that the wiki's advice might be out of dat
Currently using Solr 1.4,
I am looking for a way to conduct a Solr search with a single query
and multiple locations. The goal is not to find the intersection of these
locations (so I can't just apply multiple filter queries) but to
return documents in range 1 OR range 2.
I am currently working with
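Until real geo support lands, one workaround in 1.4 is plain numeric range clauses on the lat/lng fields, OR-ed together inside a single filter query so the union is preserved. A sketch; the field names and bounding boxes are placeholders, not from the original mail:

```java
public class RangeOrQuery {
    // Build a clause matching one lat/lng bounding box.
    // Field names and ranges are assumptions for this sketch.
    static String boxClause(String latField, String lngField,
                            double latMin, double latMax,
                            double lngMin, double lngMax) {
        return "(" + latField + ":[" + latMin + " TO " + latMax + "]"
             + " AND " + lngField + ":[" + lngMin + " TO " + lngMax + "])";
    }

    public static void main(String[] args) {
        // Two boxes OR-ed into ONE fq; separate fq params would intersect instead.
        String fq = boxClause("lat", "lng", 40.0, 41.0, -74.5, -73.5)
                  + " OR "
                  + boxClause("lat", "lng", 37.0, 38.0, -122.8, -121.8);
        System.out.println(fq);
    }
}
```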
Hey Andrew,
Just wondering if you ever managed to run TextProfileSignature based
deduplication. I would appreciate it if you could send me the code fragment
for it from solrconfig.
I currently have something like this, but I'm not sure if I am doing it right:
true
signature
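For reference, the Deduplication wiki page wires the signature settings into an updateRequestProcessorChain rather than as bare options; a sketch along those lines (the chain name and the fields list are placeholders, not from the original mail):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">org.apache.solr.update.processor.TextProfileSignature</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```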
Hi all,
We are about to test out various factors to try to speed up our indexing
process. One set of experiments will try various maxRamBufferSizeMB settings.
Since the factors we will be varying are at the Lucene level, we are
considering using the Lucene Benchmark utilities in Lucene/cont
Hi All,
I've been running some tests using 6 shards each one containing about 1
millions documents.
Each shard is running in its own virtual machine with 7 GB of ram (5GB
allocated to the JVM).
After about 1100 unique queries the shards start to struggle and run out of
memory. I've reduced all
The way I did it with SQL Server is like this:
Let's say you have a field called "Company" which is multivalued; you would declare it like this in schema.xml:
in your SQL query, you would do this:
select
table.field1
, (select distinct cast (c.ID AS VARCHAR(10)) + ','
from tab
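The schema declaration itself was swallowed by the archive; a typical multivalued declaration for the "Company" field would look something like this, with the type and attributes being assumptions:

```xml
<field name="Company" type="string" indexed="true" stored="true" multiValued="true"/>
```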
How would I do a facet search if I did this and not get duplicates?
Thanks,
Moazzam
On Mon, Jun 7, 2010 at 10:07 AM, Israel Ekpo wrote:
> I think you need a 1:1 mapping between the consultant and the company, else
> how are you going to run your queries for let's say consultants that worked
> fo
For the record, I've been running one of our production Solr 1.4
installs under the Ubuntu 9.04 tomcat6 + OpenJDK package, and haven't
run into difficulties yet.
On Tue, Jun 8, 2010 at 8:00 AM, K Wong wrote:
> Okay. I've been running multicore Solr 1.4 on Tomcat 5.5/OpenJDK 6
> straight out of t
Okay. I've been running multicore Solr 1.4 on Tomcat 5.5/OpenJDK 6
straight out of the centos repo and I've not had any issues. We're not
doing anything wild and crazy with it though.
K
On Tue, Jun 8, 2010 at 7:20 AM, Sixten Otto wrote:
> On Mon, Jun 7, 2010 at 9:23 PM, K Wong wrote:
>> Did y
I had read http://wiki.apache.org/solr/DataImportHandler but I didn't
realize that was for multivalued fields.
Thank you very much!
On Tue, Jun 8, 2010 at 8:01 AM, Alexey Serba wrote:
> Hi Alberto,
>
> You can add child entity which returns multiple records, i.e.
>
> HTH,
> Alex
On Mon, Jun 7, 2010 at 9:23 PM, K Wong wrote:
> Did you install tomcat 5.5 from an RPM?
I did not, on the advice of that same Solr wiki article, which says manual
installation is "recommended because distribution Tomcats are either
old or quirky." There haven't been any issues with this, except that
the
Thanks, Marco.
That makes sense. Your explanation shows why the distributed search got the
numFound but didn't return the results.
Looks like I have to rebuild my indexes.
On Tue, Jun 8, 2010 at 9:33 PM, Marco Martinez <mmarti...@paradigmatecnologico.com> wrote:
> Is there a way to let "ID" not be "indexe
Is there a way to let "ID" not be "indexed" in solr?
If I am not wrong, this is not possible if you want distributed searches,
because Solr internally uses the ids to retrieve the correct pagination in a
distributed search. I mean, when you do a distributed search (i.e. two
shards), two searches are
> It appears that the defType parameter is not being set by the request
> handler.
What do you get when you append &echoParams=all to your search url?
So you have something like this entry in solrconfig.xml
myqp
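The entry the archive swallowed is presumably a defType default on the handler; a sketch of both registrations (the class name com.example.MyQParserPlugin is a placeholder):

```xml
<!-- "myqp" in the defaults must match the name the queryParser is registered under -->
<queryParser name="myqp" class="com.example.MyQParserPlugin"/>

<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="defType">myqp</str>
  </lst>
</requestHandler>
```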
Hi Raakhi,
I am not sure if I understand your use case correctly,
but if you need this custom location to test against an
existing schema/config file I found this snippet [1].
Otherwise the Solr home can be set with
-Dsolr.solr.home=/opt/solr/example
More information is available here [2].
Rega
Hi Markus,
Thanks for replying.
I figured out the reason this afternoon. Sorry for not following up on this
list. I posted it to the dev list because I think it is a BUG.
I finally know why it doesn't return the result.
Hi.
I have extended QParserPlugin according to the Solr Wiki and registered it in
solrconfig.xml.
Using the defType query parameter, it worked with my multi-core server, giving
285 hits for my search.
Next, I wanted it to be the default query parser, so I added it to the standard
search handler
The
Did you send a commit after the last doc was posted to Solr?
> -Original Message-
> From: Scott Zhang [mailto:macromars...@gmail.com]
> Sent: Tuesday, June 8, 2010 08:30
> To: solr-user@lucene.apache.org
> Subject: Re: Distributed Search doesn't response the result set
>
> Hi. A
Hi,
I'm trying to make the geonames.org query parser
(http://www.ibm.com/developerworks/opensource/library/j-spatial/index.html?ca=drs-)
work with the nightly Solr build.
I've added three jar files (geonames*.jar, jdom*.jar,
spatial-ex.jar) to /examples/solr/lib/ and I've added the GeonamesQParse
Hi,
Is there any way to create an IndexSchema from a
CommonsHttpSolrServer (where you don't know the location of schema.xml and
solrconfig.xml, or those files are on some other machine)?
I tried looking for SolrJ APIs for the same, but couldn't find any.
Or is there any way to retrieve sch
Recently I looked a bit at DataImportHandler and I'm really impressed with
the flexibility of transform / import options.
Especially with integrations with Solr Cell / Tika this has become a great
Data importer.
Besides some use-cases that import to Solr (which I plan to migrate to DIH
asap), DI
my bad, it looks like XPathEntityProcessor doesn't support relative xpaths.
However, I quickly looked at the Slashdot example (which is pretty good
actually) at http://wiki.apache.org/solr/DataImportHandler.
From that I infer that you use only 1 entity per xml-doc. And within that
entity use mult
I used the 1.5 build a few weeks ago, implemented the geospatial
functionality, and it worked really well.
However, due to the unknown quantity in terms of stability (and the uncertain
future of 1.5) etc. we decided not to use it in production.
rob ganly
On 8 June 2010 03:50, Darren Govoni wrote:
> Meanwhile, I'd like to try using POST, but I didn't find
> information
> about how to do this. Could someone point me to a link to
> some
> sample code?
you can pass METHOD.POST to query method of SolrServer.
public QueryResponse query(SolrParams params, METHOD method)
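For completeness, SolrJ issues the POST for you when you pass METHOD.POST. If you are not using SolrJ, the /select handler also accepts an ordinary form-encoded HTTP POST. A minimal pure-JDK sketch (the URL and query parameters are placeholders):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PostQuery {
    // Form-encode alternating key/value pairs into a POST body.
    static String encode(String... kv) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < kv.length; i += 2) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(kv[i], StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(kv[i + 1], StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String body = encode("q", "title:solr", "rows", "10", "wt", "xml");
        System.out.println(body);
        if (args.length > 0) { // pass the select URL, e.g. http://localhost:8983/solr/select
            HttpURLConnection con =
                    (HttpURLConnection) new URL(args[0]).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type",
                    "application/x-www-form-urlencoded; charset=UTF-8");
            try (OutputStream out = con.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + con.getResponseCode());
        }
    }
}
```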
I have tried both changing the datasource per child node to use the
parent node's name, and making the XPaths relative, all
causing either exceptions saying that the XPath must start with /, or
null pointer exceptions (nsfgrantsdir document : null).
Best regards
On Mon, Jun 7, 2010 at 4:12
I was using SolrQuery. Now I'm switching to QueryRequest.
Hope this works. Thanks!
On Mon, Jun 7, 2010 at 11:26 PM, jlist9 wrote:
> Thank you for the reply! I'm using Tomcat 6.0.20. I read the page.
> I think you meant setting URIEncoding for the connector:
>
>
> I tried this but it still doesn'