No, I don't use YARN. This is the standalone Spark that comes with the DataStax
Enterprise version of Cassandra.
On Thu, Oct 26, 2017 at 11:22 PM, Jörn Franke wrote:
> Do you use YARN? Then you need to configure the queues with the right
> scheduler and method.
>
> On 27. Oct 2017, at 08
Hi,
I am using Spark 1.6. I wrote a custom receiver to read from a WebSocket. But
when I start my Spark job, it connects to the WebSocket but doesn't get
any messages. The same code, if I write it as a separate Scala class, works and
prints messages from the WebSocket. Is anything missing in my Spark code? Th
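For reference, a minimal receiver along these lines would look roughly like the
sketch below. This is only a sketch: it assumes the Java-WebSocket client library
(org.java_websocket) is on the classpath, and the class name and URL are
illustrative rather than taken from the original code.

import java.net.URI
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver
import org.java_websocket.client.WebSocketClient
import org.java_websocket.handshake.ServerHandshake

class WebSocketReceiver(url: String)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  @volatile private var client: WebSocketClient = _

  // onStart must return quickly; the socket lives on its own thread.
  override def onStart(): Unit = {
    client = new WebSocketClient(new URI(url)) {
      override def onOpen(handshake: ServerHandshake): Unit = ()
      // Hand every frame to Spark; without store() nothing reaches the DStream.
      override def onMessage(message: String): Unit = store(message)
      override def onClose(code: Int, reason: String, remote: Boolean): Unit =
        restart("WebSocket closed: " + reason)
      override def onError(ex: Exception): Unit = restart("WebSocket error", ex)
    }
    client.connect()
  }

  override def onStop(): Unit = {
    if (client != null) client.close()
  }
}

// Usage: wire it into the streaming context, add an output operation, and start.
// val stream = ssc.receiverStream(new WebSocketReceiver("ws://host:8080/feed"))
// stream.print()
// ssc.start(); ssc.awaitTermination()

If the receiver itself is fine, the usual suspects are a missing output operation
(or a missing ssc.start()/ssc.awaitTermination()), or running with a single core
such as local[1], which leaves no slot for processing once the receiver takes one.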
as adjusting the memoryFraction will only be a stopgap. Both shuffle and
> storage memoryFractions default to 0.6
>
> I have set the above parameters to 0.5. Do they need to be increased?
Thanks.
> On Wed, Jun 15, 2016 at 9:37 PM, Cassa L wrote:
>
>> Hi,
>> I did set --driver
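For anyone following the memoryFraction discussion above: with the Spark 1.x
(legacy) memory manager these are ordinary configuration keys, so they can be set
either with --conf on spark-submit or on the SparkConf. A minimal sketch, using
the 0.5 values mentioned above purely as an illustration, not a recommendation:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("kafka-to-cassandra")              // illustrative app name
  .set("spark.storage.memoryFraction", "0.5")    // cache space (legacy memory manager)
  .set("spark.shuffle.memoryFraction", "0.5")    // shuffle aggregation space (legacy)
val sc = new SparkContext(conf)

The two fractions compete for the same heap, so raising one effectively squeezes
the other. From Spark 1.6 onward the unified memory manager (spark.memory.fraction)
replaces these keys unless spark.memory.useLegacyMode is set.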
>> What does your "I am reading data from Kafka into Spark and writing it
>> into Cassandra after processing it." pipeline look like?
>>
>> Regards,
>> Jacek Laskowski
>>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Mon, Jun 13, 2016 at 11:56 PM, Cassa L wrote:
> > Hi,
> >
> > I'm using Spark version 1.5.1. I am
n option,
>> probably worth a try.
>>
>> Cheers
>> Ben
>>
>> On Wed, 15 Jun 2016 at 08:48 Cassa L wrote:
>>
>>> Hi,
>>> I would appreciate any clue on this. It has become a bottleneck for our
>>> Spark job.
>>>
>>>
>> Cheers
>> Ben
>>
>> On Wed, 15 Jun 2016 at 08:48 Cassa L wrote:
>>
>>> Hi,
>>> I would appreciate any clue on this. It has become a bottleneck for our
>>> Spark job.
>>>
>>> On Mon, Jun 13, 2016 at 2:56 PM, Cassa L wrote:
Hi,
I would appreciate any clue on this. It has become a bottleneck for our
Spark job.
On Mon, Jun 13, 2016 at 2:56 PM, Cassa L wrote:
> Hi,
>
> I'm using Spark version 1.5.1. I am reading data from Kafka into Spark and
> writing it into Cassandra after processing it. Spark jo
Hi,
I'm using Spark version 1.5.1. I am reading data from Kafka into Spark
and writing it into Cassandra after processing it. The Spark job starts
fine and runs well for some time until I start getting the errors below.
Once these errors appear, the job starts to lag behind and I see that the
job has scheduling
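For context on the pipeline shape asked about earlier in the thread, a
stripped-down job using the direct Kafka stream and the DataStax connector might
look like the sketch below. The broker, topic, keyspace, table, and column names
are made up, and the map is a stand-in for the real processing.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

object KafkaToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kafka-to-cassandra")
      .set("spark.cassandra.connection.host", "127.0.0.1")  // illustrative host
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")  // illustrative broker
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))                               // illustrative topic

    stream
      .map { case (_, value) => (value, value.length) }              // stand-in for real processing
      .saveToCassandra("my_ks", "my_table", SomeColumns("payload", "length"))

    ssc.start()
    ssc.awaitTermination()
  }
}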
nnector with 1.4.x). The main trick is in
> lining up all the versions and building an appropriate connector jar.
>
>
>
> Cheers
>
> Ben
>
>
>
> On Wed, 18 May 2016 at 15:40 Cassa L wrote:
>
> Hi,
>
> I followed the instructions to run spark-shell with Spark 1.6. It
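On the 'lining up all the versions' point above, the usual approach is to pin
everything in one build file. The sketch below is only an illustration; the exact
connector release to pair with a given Spark version should be taken from the
connector's compatibility table rather than from these numbers.

// build.sbt
scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "1.6.1" % "provided",
  "org.apache.spark"   %% "spark-streaming"           % "1.6.1" % "provided",
  "org.apache.spark"   %% "spark-streaming-kafka"     % "1.6.1",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0"
)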
upport page will probably
> give you a good steer to get started even if you’re not using Instaclustr:
> https://support.instaclustr.com/hc/en-us/articles/213097877-Getting-Started-with-Instaclustr-Spark-Cassandra-
>
>
>
> Cheers
>
> Ben
>
>
>
> On Tue, 10 May
Hi,
Has anyone tried accessing Cassandra data from spark-shell? How do you do
it? Can you use HiveContext for Cassandra data? I'm using the community
version of Cassandra 3.0.
Thanks,
LCassa
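In case it helps, the usual pattern is to start the shell with the connector on
the classpath and then use either the RDD or the DataFrame API; a plain
SQLContext (or HiveContext) can query the data once the Cassandra source is
registered, so Hive itself is not required. A sketch, with the coordinates,
host, keyspace, and table names made up:

// Started as, for example:
//   spark-shell --packages datastax:spark-cassandra-connector:1.6.0-s_2.10 \
//               --conf spark.cassandra.connection.host=127.0.0.1
// (match the connector version to your Spark and Scala build)

import com.datastax.spark.connector._

// RDD API: read a table directly.
val rdd = sc.cassandraTable("my_ks", "my_table")
rdd.take(10).foreach(println)

// DataFrame API: expose the table to SQL.
val df = sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .load()
df.registerTempTable("my_table")
sqlContext.sql("SELECT count(*) FROM my_table").show()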
Hi,
Has anyone used Protobuf with the spark-cassandra connector? I am using
protobuf-3.0-beta with spark-1.4 and cassandra-connector-2.10. I keep
getting "Unable to find proto buffer class" in my code. I checked the version
of the protobuf jar and it is loaded as 3.0-beta on the classpath. Protobuf is
comin
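A common cause of "Unable to find proto buffer class" is that the generated
protobuf classes (or the protobuf 3.0-beta runtime) are not visible to the
classloader doing the deserialization on the executors, for example because the
Spark assembly ships an older protobuf. A sketch of one way to favour the
application's own jars; the property names are real Spark 1.x settings, but
whether this resolves the error depends on how the job is packaged:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("protobuf-job")                            // illustrative name
  .set("spark.jars", "/path/to/app-with-protobuf.jar")   // or pass --jars to spark-submit
  .set("spark.executor.userClassPathFirst", "true")      // prefer user jars over the assembly
  .set("spark.driver.userClassPathFirst", "true")
val sc = new SparkContext(conf)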
Thank you all for the responses to this thread. I am planning to use
Cassandra 1.1.9 with Astyanax. Does anyone have a Cassandra 1.x version running
in production with Astyanax? Did you come across any show-stopper issues?
Thanks
LCassa
On Thu, Feb 7, 2013 at 8:50 AM, Bartłomiej Romański wrote:
>
Hi,
Has anyone used the Netflix Astyanax Java client library for Cassandra? I have
used Hector before and would like to evaluate Astyanax. I'm not sure how well it
is accepted in the Cassandra community. Are there any issues with it, or advantages?
The API looks very clean and simple compared to Hector. Has anyone used it in
prod
> Now it is true that it could be a shame to interrupt a compaction that has
> been running for a long time and is about to finish (so typically not one
> that
> has just been triggered by your drain), but you can always check the
> compaction manager in JMX to see if that's the case before killing
re split-brain
> for a while.
>
> /***
> sent from my android...please pardon occasional typos as I respond @ the
> speed of thought
> /
>
> On Oct 10, 2011 10:09 PM, "Cassa L" wrote:
>
> I am trying to understand mu
I am trying to understand the multi-DC setup for Cassandra. As I understand it, in
this setup, replicas exist in the same cluster ring, but the nodes are physically
distributed across DCs. Is this correct?
I have two different cluster rings in two DCs, and want to replicate data
bidirectionally. They both have
Hi,
I want to transfer data from a ring which is on 0.7.4 to a separate ring
running on 0.8. This ring does not even have the schema definition for the data
available on 0.7.4. What is the best way to copy the data and schema from the 0.7
cluster to the 0.8 one? Do I need to define the schema manually and then copy ssT
I am wondering why
this dead/up pattern is occurring at Gossip.
Thanks in advance,
Cassa L.