of Cassandra and performs better for us.
>
> We are using it for a full-text search use case.
>
> Regards
> Asit
>
> On Sun, Mar 22, 2015 at 12:14 PM, Mehak Mehta
> wrote:
>
>> Hi,
>>
>> On the basis of some suggestions, I tried using Tuplejump for [...]
>
> [...] the query only to the nodes owning the data:
>
> SELECT * FROM images.results1 WHERE image_caseid='mehak' AND lucene='{
> filter:{type:"boolean", must:[
> {field:"x", type:"range", lower:100},
> {field:"y", type:&q
>> [...]412787034156cbb#file-cassandra-install-sh-L42
>>
>> Lines 42 - 48 list a few settings that you could try out for increasing /
>> reducing the memory limits (assuming you're on Linux).
>>
>> Also, are you using an SSD? If so, make sure the IO scheduler is no[...]
[...] Ali Akhtar wrote:
> What's your memory / CPU usage at? And how much RAM + CPU do you have on
> this server?
>
>
>
> On Wed, Mar 18, 2015 at 2:31 PM, Mehak Mehta
> wrote:
>
>> Currently there is only a single node, which I am calling directly, with
>> around 15 r[...]
> 1) [...] a server, make sure the nodes are up.
>
> 2) Are you calling this on the same server where Cassandra is running? It's
> trying to connect to localhost. If you're running it on a different
> server, try passing in the direct IP of your Cassandra server.
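>
> A minimal sketch of pointing the Java driver at a specific node instead of
> localhost (the IP below is a placeholder):
>
>     import com.datastax.driver.core.*;
>
>     public class Connect {
>         public static void main(String[] args) {
>             Cluster cluster = Cluster.builder()
>                 .addContactPoint("192.168.1.10")  // your Cassandra node's IP
>                 .build();
>             Session session = cluster.connect();
>             // Confirm we reached the right cluster before running queries.
>             System.out.println(cluster.getMetadata().getClusterName());
>             cluster.close();
>         }
>     }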
>
> On Wed, Mar 1[...]
> If you want to stick to Cassandra, you might have better luck if you made
> your range columns part of the primary key, so something like PRIMARY
> KEY(caseId, x, y)
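>
> A sketch of that suggestion, with table and column names assumed from the
> thread. With x as the first clustering column, a slice on x becomes valid
> CQL; note that an additional independent range on y would still be rejected
> unless x is fixed with an equality.
>
>     import com.datastax.driver.core.*;
>
>     public class RangeKey {
>         public static void main(String[] args) {
>             Cluster cluster = Cluster.builder().addContactPoint("192.168.1.10").build();
>             Session session = cluster.connect("images");
>             // Range columns promoted into the primary key, as suggested above.
>             session.execute(
>                 "CREATE TABLE IF NOT EXISTS results2 ("
>               + " image_caseid text, x double, y double, data text,"
>               + " PRIMARY KEY (image_caseid, x, y))");
>             // A slice on the first clustering column is now a normal CQL read:
>             ResultSet rs = session.execute(
>                 "SELECT * FROM results2"
>               + " WHERE image_caseid='mehak' AND x >= 100 AND x <= 200");
>             for (Row row : rs) {
>                 System.out.println(row);
>             }
>             cluster.close();
>         }
>     }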
>
> On Wed, Mar 18, 2015 at 1:41 PM, Mehak Mehta
> wrote:
>
>> The rendering tool renders a portion a[...]
>> [...]tch sizes, I've also run into these timeouts, but reducing the batch
>> size to 2k seemed to work for me.
>>
>> On Wed, Mar 18, 2015 at 1:24 PM, Mehak Mehta
>> wrote:
>>
>>> We have a UI interface which needs this data for rendering.
>>> So [...]
> [...]000? For 1M rows, it seems
> like the difference would only be a few minutes. Do you have to do this all
> the time, or only once in a while?
>
> On Wed, Mar 18, 2015 at 12:34 PM, Mehak Mehta
> wrote:
>
>> Yes, it works for 1000 but not more than that.
>> How can I fetch all rows using this efficiently?
Yes, it works for 1000 but not more than that.
How can I fetch all rows using this efficiently?
On Wed, Mar 18, 2015 at 3:29 AM, Ali Akhtar wrote:
> Have you tried a smaller fetch size, such as 5k - 2k?
>
> On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta
> wrote:
>
>> Hi Jens[...]
[...] wrote:
> Hi,
>
> Try setting fetchsize before querying. Assuming you don't set it too high,
> and you don't have too many tombstones, that should do it.
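>
> A minimal sketch with the Java driver; the fetch size and query are
> illustrative. The driver pages transparently, so iterating the result set
> pulls pages of the chosen size on demand rather than the whole million rows
> at once.
>
>     import com.datastax.driver.core.*;
>
>     public class PagedFetch {
>         public static void main(String[] args) {
>             Cluster cluster = Cluster.builder().addContactPoint("192.168.1.10").build();
>             Session session = cluster.connect("images");
>             Statement stmt = new SimpleStatement(
>                 "SELECT * FROM results1 WHERE image_caseid='mehak'");
>             stmt.setFetchSize(2000);  // rows per page, instead of the 5000 default
>             for (Row row : session.execute(stmt)) {
>                 // each iteration may fetch the next page behind the scenes
>             }
>             cluster.close();
>         }
>     }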
>
> Cheers,
> Jens
>
> –
> Sent from Mailbox <https://www.dropbox.com/mailbox>
>
>
> On Wed, [...]
Hi,
I have a requirement to fetch a million rows as the result of my query, which
is giving timeout errors.
I am fetching results by selecting on clustering columns, so why are the
queries taking so long? I can change the timeout settings, but I need the data
to be fetched faster as per my requirement.
My table [...]
> [...] need for
> any filtering. What did you really mean to do? It makes no sense the way
> you have it!
>
> Either go with DSE Search/Solr, or google "Tuplejump Stargate" or
> "Stratio".
>
> -- Jack Krupansky
>
> On Tue, Mar 17, 2015 at 4:51 PM, Mehak Mehta wrote:
On Mar 18, at 2:11 AM, Jack Krupansky wrote:
>
> 1. Create multiple secondary indexes, one for each non-key column you need
> to index on. Not recommended. Considered an anti-pattern for Cassandra.
> 2. Use DSE Search/Solr.
> 3. Use Lucene-based indexing with Tuplejump/Stargate or Stratio (see the
> sketch below).
>
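>
> A hedged sketch of option 3. The index class below is the one the Stratio
> fork documented around this time; treat the class name and options as
> assumptions and check your plugin's docs (Stargate uses a different class).
> The plugin indexes through a dedicated dummy column (here: lucene), which is
> also the column the JSON search expressions are passed to at query time.
>
>     import com.datastax.driver.core.*;
>
>     public class LuceneIndexSetup {
>         public static void main(String[] args) {
>             Cluster cluster = Cluster.builder().addContactPoint("192.168.1.10").build();
>             Session session = cluster.connect("images");
>             session.execute(
>                 "CREATE CUSTOM INDEX IF NOT EXISTS results1_idx ON results1 (lucene)"
>               + " USING 'com.stratio.cassandra.index.RowIndex'"  // assumed class name
>               + " WITH OPTIONS = { 'refresh_seconds':'1',"
>               + " 'schema':'{fields:{x:{type:\"double\"}, y:{type:\"double\"}}}' }");
>             cluster.close();
>         }
>     }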
Hi,
I want to perform range queries (as in x and y ranges) on a large dataset of
billions of rows.
CQL allows me to put non-EQ restrictions on only one of the clustering
columns.
It's not allowing me to filter the data using any other column, even with use
of the ALLOW FILTERING option.
cqlsh:images> select [...]
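
A sketch of the restriction being described, assuming x and y are clustering
columns after the partition key image_caseid (the schema suggested earlier in
the thread): one range on x is accepted, but a second range on y is rejected,
even with ALLOW FILTERING.

    import com.datastax.driver.core.*;
    import com.datastax.driver.core.exceptions.InvalidQueryException;

    public class TwoRanges {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("192.168.1.10").build();
            Session session = cluster.connect("images");
            // Accepted: a single non-EQ restriction on the first clustering column.
            session.execute("SELECT * FROM results1"
                + " WHERE image_caseid='mehak' AND x >= 100 AND x <= 200");
            try {
                // Rejected: an additional range on the second clustering column.
                session.execute("SELECT * FROM results1"
                    + " WHERE image_caseid='mehak' AND x >= 100 AND y >= 100"
                    + " ALLOW FILTERING");
            } catch (InvalidQueryException e) {
                System.out.println("Rejected by CQL: " + e.getMessage());
            }
            cluster.close();
        }
    }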