Can you specify what you mean by 'problem'? I don't think there should
be any issues with that.
Hope this is what you followed in your attempt so far:
http://wiki.apache.org/solr/SolrCloud#Example_B:_Simple_two_shard_cluster_with_shard_replicas
On Thu, Sep 12, 2013 at 11:31 AM, Prasi S wrote:
Followup: I just tried modifying the select with
select CAST('APPLICATION' as varchar2(100)) as sourceid, ...
and that caused the sourceid field to be empty. CASTing to char(100) gave
me the expected value ('APPLICATION', right-padded to 100 characters).
Meanwhile, google gave me this: http://bu
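Based on the finding above, a workaround could look like the following sketch: CAST to CHAR (which the old thin driver handles correctly) and strip the right-padding. The alias mirrors the query quoted earlier; everything else is illustrative.

```sql
-- Sketch of a workaround for the empty-VARCHAR2 result described above:
-- CAST to CHAR(100) works, so trim the padding back off in SQL.
select RTRIM(CAST('APPLICATION' as char(100))) as sourceid
from dual;
```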
Could it have something to do with the meta encoding tag being iso-8859-1 while the
http header says utf-8, so Firefox interprets it as utf-8?
On 12. Sep 2013, at 8:36 AM, Andreas Owen wrote:
> no jetty, and yes for tomcat i've seen a couple of answers
>
> On 12. Sep 2013, at 3:12 AM, Otis Gospodneti
This is probably a bug with Oracle thin JDBC driver. Google found a
similar issue:
http://stackoverflow.com/questions/4168494/resultset-getstring-on-varchar2-column-returns-empty-string
I don't think this is specific to DataImportHandler.
On Thu, Sep 12, 2013 at 12:43 PM, Raymond Wiker wrote:
>
Maybe the fact that we are never ever going to delete or update
documents can be used for something. If we delete, we will delete entire
collections.
Regards, Per Steffensen
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr-node
on
Seems like the attachments didn't make it through to this mailing list
https://dl.dropboxusercontent.com/u/25718039/doccount.png
https://dl.dropboxusercontent.com/u/25718039/iowait.png
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr-
Hi
I tried to reindex Solr. I get the regular expression problem. The
steps I followed are:
I started Solr with java -jar start.jar
http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/update?stream.body=<commit/>
I stopped the Solr server
I changed the indexed and stored attributes to false
Hi,
My question is related to OpenNLP integration with Solr.
I have successfully applied the OpenNLP LUCENE-2899-x.patch to the latest Solr
branch, checked out from here:
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x
I am also able to compile the source code and generate all related binaries
Hi all,
I am trying SolrCloud on my server. The server is a virtual machine.
I have followed the SolrCloud wiki: http://wiki.apache.org/solr/SolrCloud
When I run SolrCloud, it fails. But if I try on my local machine, it runs
successfully. Why does Solr behave differently on the server and locally?
My
Fewer client threads updating makes sense, and going to 1 core also seems
like it might help. But it's all a crap-shoot unless the underlying cause
gets fixed up. Both would improve things, but you'll still hit the problem
sometime, probably when doing a demo for your boss ;).
Adrien has branched
Per:
One thing I'll be curious about. From my reading of DocValues, it uses
little or no heap. But it _will_ use memory from the OS if I followed
Simon's slides correctly. So I wonder if you'll hit swapping issues...
Which are better than OOMs, certainly...
Thanks,
Erick
On Thu, Sep 12, 2013 at
Hi,
I am also seeing this issue when the search query is something like "how
are you?" (Quotes for clarity).
The query parser splits it to the below tokens:
+text:whats +text:your +text:raashee?
However when I remove the "?" from the search query "how are you" I get the
results.
Is "?" a special
You must specify maxShardsPerNode=3 for this to happen. By default,
maxShardsPerNode is 1, so only one shard is created per node.
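For example, a Collections API call like the following (the collection name, host, and port are assumptions) would allow 3 shards with replicationFactor=3 to be placed across 3 nodes:

```shell
# Illustrative: create a 3x3 collection, allowing up to 3 shards per node.
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=3&maxShardsPerNode=3'
```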
On Thu, Sep 12, 2013 at 3:19 AM, Aditya Sakhuja
wrote:
> Hi -
>
> I am trying to set the 3 shards and 3 replicas for my solrcloud deployment
> with 3 servers, s
That sounds reasonable. I've done some more digging, and found that the
database instance in this case is an _OLD_ version of Oracle: 9.2.0.8.0. I
also tried using the OCI driver (version 12), which refuses to even talk to
this database.
I have three other databases running on more recent versions
Thanks. It'd be great if you can update this thread if you ever find a
workaround. We will document it on the DataImportHandlerFaq wiki page.
http://wiki.apache.org/solr/DataImportHandlerFaq
On Thu, Sep 12, 2013 at 4:56 PM, Raymond Wiker wrote:
> That sounds reasonable. I've done some more diggi
Yes, thanks.
Actually some months back I made PoC of a FieldCache that could expand
beyond the heap. Basically imagine a FieldCache with room for
"unlimited" data-arrays, that just behind the scenes goes to
memory-mapped files when there is no more room on heap. Never finished
it, and it migh
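The overflow-to-memory-mapped-files idea can be sketched with plain NIO: an array of longs backed by a mapped file instead of the heap. This is an illustrative toy, not Per's actual PoC, and the class name is made up.

```java
import java.io.IOException;
import java.nio.LongBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: a long[] "field cache" segment that lives in a
// memory-mapped file (OS page cache) rather than on the Java heap.
class MappedLongArray {
    private final LongBuffer values;

    MappedLongArray(Path file, int size) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // The mapping stays valid after the channel is closed.
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_WRITE, 0, (long) size * Long.BYTES);
            this.values = buf.asLongBuffer();
        }
    }

    void set(int docId, long value) { values.put(docId, value); }
    long get(int docId) { return values.get(docId); }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fieldcache", ".bin");
        MappedLongArray cache = new MappedLongArray(tmp, 1000);
        cache.set(42, 123456789L);
        System.out.println(cache.get(42)); // prints 123456789
        Files.delete(tmp);
    }
}
```

The point of the design is exactly what the mail describes: the data lives in OS-managed memory, so it competes with the page cache rather than with the heap, trading OOMs for potential paging.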
Question mark and asterisk are wildcard characters, so if you want them to
be treated as punctuation, either enclose the terms in quotes or escape the
characters.
Wildcard characters suppress the execution of some token filters if they are
not able to cope with wildcards.
-- Jack Krupansky
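Escaping can be sketched in plain Java; this mirrors the character set handled by SolrJ's ClientUtils.escapeQueryChars, but it is a standalone illustration, not the library method itself.

```java
// Sketch: backslash-escape Lucene/Solr query-syntax characters so that
// "?" and "*" are treated as literal punctuation rather than wildcards.
class QueryEscaper {
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // Characters with special meaning in the Lucene query syntax.
            if ("\\+-!():^[]\"{}~*?|&;/".indexOf(c) >= 0
                    || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "?" and the spaces come out backslash-escaped.
        System.out.println(escape("how are you?"));
    }
}
```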
On Thu, 2013-09-12 at 14:48 +0200, Per Steffensen wrote:
> Actually some months back I made PoC of a FieldCache that could expand
> beyond the heap. Basically imagine a FieldCache with room for
> "unlimited" data-arrays, that just behind the scenes goes to
> memory-mapped files when there is no
On 9/12/13 3:28 PM, Toke Eskildsen wrote:
On Thu, 2013-09-12 at 14:48 +0200, Per Steffensen wrote:
Actually some months back I made PoC of a FieldCache that could expand
beyond the heap. Basically imagine a FieldCache with room for
"unlimited" data-arrays, that just behind the scenes goes to
mem
Hi,
I got a small issue here: my facet settings are returning counts for the empty
string "", i.e. when the actual field was empty.
Here are the facet settings:
count
6
1
false
and this is the part of the result I don't want:
4
(that is coming because the query results had 4 rows with no value in that
My problem is solved. My server's default Java version was 1.5. I upgraded
the Java version.
2013/9/12 cihat güzel
> hi all.
> I am trying solr cloud on my server. The server is a virtual machine.
>
> I have followed solr cloude wiki " http://wiki.apache.org/solr/SolrCloud
> ".
> When I run solr Clou
On 9/12/2013 2:14 AM, Per Steffensen wrote:
>> Starting from an empty collection. Things are fine wrt
>> storing/indexing speed for the first two-three hours (100M docs per
>> hour), then speed goes down dramatically, to an, for us, unacceptable
>> level (max 10M per hour). At the same time as spee
Neoman,
Make sure that solr08-prod (or the elected leader at any time) isn't doing a
stop-the-world garbage collection that takes long enough that the zookeeper
connection times out. I've seen that in my cluster when I didn't have parallel
GC enabled and my "zkClientTimeout" in solr.xml was too
Thanks Greg. Currently we have 60 seconds (we reduced it recently). I may
have to reduce it again. Can you please share your timeout value?
Neoman,
I've got ours set at 45 seconds:
${zkClientTimeout:45000}
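In the legacy Solr 4.x solr.xml format, that value lives as an attribute on the <cores> element; a sketch, with the other attributes being illustrative defaults:

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores" host="${host:}" hostPort="${jetty.port:8983}"
         zkClientTimeout="${zkClientTimeout:45000}">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>
```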
-Original Message-
From: neoman [mailto:harira...@gmail.com]
Sent: Thursday, September 12, 2013 9:33 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr cloud shard goes down after SocketException in another shard
Than
Exception in shard1 (solr01-prod) primary
<09/12/13
13:56:46:635|http-bio-8080-exec-66|ERROR|apache.solr.servlet.SolrDispatchFilter|null:ClientAbortException:
java.net.SocketException: Broken pipe
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:406)
Hi Jack,
On Sep 11, 2013, at 5:34pm, Jack Krupansky wrote:
> Do a copyField to another field, with a limit of 8 characters, and then use
> that other field.
Thanks - I should have included a few more details in my original question.
The issue is that I've got an index with 200M records, of whi
On 9/12/2013 7:54 AM, Raheel Hasan wrote:
> I got a small issue here, my facet settings are returning counts for empty
> "". I.e. when no the actual field was empty.
>
> Here are the facet settings:
>
> count
> 6
> 1
> false
>
> and this is the part of the result I dont want:
> 4
The "facet.mis
OK, so I got the idea... I will pull 7 fields instead and remove the empty
one...
But there must be some setting in the facet configuration to
ignore a certain value if we want to
On Thu, Sep 12, 2013 at 7:44 PM, Shawn Heisey wrote:
> On 9/12/2013 7:54 AM, Raheel Hasan wrote:
>
Slow down, back up, and now tell us what problem (if any!) you are really
trying to solve. Don't leap to a proposed solution before you clearly state
the problem to be solved.
First, why do you think there is any problem at all?
Or, what are you really trying to achieve?
-- Jack Krupansky
It was the HTTP header; as soon as I forced an iso-8859-1 header it worked.
On 12. Sep 2013, at 9:44 AM, Andreas Owen wrote:
> could it have something to do with the meta encoding tag is iso-8859-1 but
> the http-header tag is utf8 and firefox inteprets it as utf8?
>
> On 12. Sep 2013, at 8:36 AM,
Hi Jack,
Sorry, I was not clear earlier. What I'm trying to achieve is:
I want to know when a document is committed (hard commit). In my case there
can be a lot of time lapse (1 hour or more) between the time you index a
document and when you issue a commit. Now, I exactly want to know when
a d
Lol, at breaking during a demo - always the way it is! :) I agree, we are
just tip-toeing around the issue, but waiting for 4.5 is definitely an
option if we "get-by" for now in testing; patched Solr versions seem to
make people uneasy sometimes :).
Seeing there seems to be some danger to SOLR-521
Right, I don't see SOLR-5232 making 4.5 unfortunately. It could perhaps make a
4.5.1 - it does resolve a critical issue - but 4.5 is in motion and SOLR-5232
is not quite ready - we need some testing.
- Mark
On Sep 12, 2013, at 2:12 PM, Erick Erickson wrote:
> My take on it is this, assuming I
Sorry, but all you've done is reshuffle your previous statements but without
telling us about the actual problem that you are trying to solve!
Repeating myself: You, the application developer can send a hard commit any
time you want to assure that documents are searchable. Maybe not every
mill
On Sep 12, 2013, at 20:55 , phanichaitanya wrote:
> Apologies again. But here is another try :
>
> I want to make sure that documents that are indexed are committed in say an
> hour. I agree that if you pass commitWithIn params and the like will make
> sure of that based on the time configuration
On 9/12/2013 12:55 PM, phanichaitanya wrote:
> I want to make sure that documents that are indexed are committed in say an
> hour. I agree that if you pass commitWithIn params and the like will make
> sure of that based on the time configurations we set. But, I want to make
> sure that the document
So, now I want to know when that document becomes searchable or when it is
committed. I've the following scenario:
1) Indexing starts at say 9:00 AM - with the above additions to the
schema.xml I'll know the indexed time of each document I send to Solr via
the update handler. Say 9:01, 9:02 and so
My take on it is this, assuming I'm reading this right:
1> SOLR-5216 - probably not going anywhere, 5232 will take care of it.
2> SOLR-5232 - expected to fix the underlying issue no matter whether
you're using CloudSolrServer from SolrJ or sending lots of updates from
lots of clients.
3> SOLR-4816
On 9/12/2013 11:17 AM, Andreas Owen wrote:
> it was the http-header, as soon as i force a iso-8859-1 header it worked
Glad you found a workaround!
If you are in a situation where you cannot control the header of the
request or modify the content itself to include charset information, or
there's s
That makes sense, thanks Erick and Mark for you help! :)
I'll see if I can find a place to assist with the testing of SOLR-5232.
Cheers,
Tim
On 12 September 2013 11:16, Mark Miller wrote:
> Right, I don't see SOLR-5232 making 4.5 unfortunately. It could perhaps
> make a 4.5.1 - it does reso
maxAnalyzedChars did it! I wasn't setting that param, and I'm working with
some very long documents. I also made the hl.fl param formatting change that
you suggested, Aloke.
Thanks again!
- Eric
On Sep 11, 2013, at 3:10 AM, Eric O'Hanlon wrote:
> Thank you, Aloke and Bryan! I'll give this
On 9/12/2013 11:04 AM, phanichaitanya wrote:
> So, now I want to know when that document becomes searchable or when it is
> committed. I've the following scenario:
>
> 1) Indexing starts at say 9:00 AM - with the above additions to the
> schema.xml I'll know the indexed time of each document I sen
Yes, the document will be searchable after it is committed.
Although you can also do auto commits and commitWithin which do not
guarantee immediate visibility of index changes, you can do a hard commit
any time you want to make a document searchable.
-- Jack Krupansky
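A hard commit as described above can be issued explicitly against the update handler, for example (host and port are assumptions):

```shell
# Illustrative: force an explicit hard commit.
curl 'http://localhost:8983/solr/update?commit=true'
```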
-Original Message--
I'd like to know when a document is committed in Solr vs. the indexed time.
For indexed time, I can add a field as : .
If I have say, 10 million docs indexed and I want to know the actual commit
time of the document which makes it searchable. The problem is to just find
the time when a document
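A common pattern for recording indexed time is a date field with a NOW default in schema.xml; this is a sketch, and the field name is illustrative ("date" must map to a date field type in the schema):

```xml
<field name="index_time" type="date" indexed="true" stored="true" default="NOW"/>
```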
Hi Prabu,
It's difficult to tell what's going wrong without the full exception stack
trace, including what the exception is.
If you can provide the specific input that triggers the exception, that might
also help.
Steve
On Sep 12, 2013, at 4:14 AM, prabu palanisamy wrote:
> Hi
>
> I tried
Hi,
I just have this issue that came out of nowhere.
Everything was fine until, all of a sudden, the browser can't connect to this
Solr.
Here is the solr log:
INFO - 2013-09-12 20:07:58.142; org.eclipse.jetty.server.Server;
jetty-8.1.8.v20121106
INFO - 2013-09-12 20:07:58.179;
org.eclipse.jetty.d
While attempting to upgrade from Solr 4.3.0 to Solr 4.4.0 I ran into
this exception:
java.lang.IllegalArgumentException: enablePositionIncrements=false is
not supported anymore as of Lucene 4.4 as it can create broken token
streams
which led me to https://issues.apache.org/jira/browse/LUCENE-496
Solr admin exposes time of last commit. You can use that.
Otis
Solr & ElasticSearch Support
http://sematext.com/
On Sep 12, 2013 3:22 PM, "phanichaitanya" wrote:
> Apologies again. But here is another try :
>
> I want to make sure that documents that are indexed are committed in say an
> hour. I
I'm trying to get the score by using a custom boost and also get the distance. I
found David's code* to get it using "Intersects", which I want to replace with
{!geofilt} or geodist()
*David's code: https://issues.apache.org/jira/browse/SOLR-4255
He told me geodist() will be available again for this ki
I really think this is the wrong approach.
bq: We do a commit on every update, but updates are very infrequent
I doubt this is actually true. You may think it is, but you just don't get
more than 8 warming searchers in the situation you describe. Fix the
_real_ problem here.
Do what Hoss said. L
Hi,
Any updates on this? Is ranking computation dependent on the 'maxDoc'
value in Solr? Is this happening due to the changing 'maxDoc' value
after each optimization? In Solr 4.4, every time optimization
is run, the 'maxDoc' value is reset, whereas this is not the case in Solr