Re: Write into composite columns
First, this is a compound primary key, not a composite [partition] key.

You don't have a column definition for dpci, your partition key.

Compact storage requires exactly one non-primary-key column, but you have two. Maybe dpci and item were supposed to be the same.

-- Jack Krupansky

On Fri, Jul 10, 2015 at 1:43 AM, Ajay Chander wrote:

> Has anyone here come across a situation like this? Thank you!
>
> On Thursday, July 9, 2015, Ajay Chander wrote:
>
> > More information:
> >
> > Below is my Cassandra bolt.
> >
> > public CassandraTest withCassandraBolt() {
> >     String[] rowKeyFields = {"item", "location"};
> >     HashMap clientConfig = new HashMap();
> >     clientConfig.put(StormCassandraConstants.CASSANDRA_HOST,
> >             this.configuration.getCassandraBoltServer());
> >     clientConfig.put(StormCassandraConstants.CASSANDRA_KEYSPACE, Arrays
> >             .asList(new String[] { this.projectConfiguration
> >                     .getCassandraBoltKeyspace() }));
> >     this.stormConfig.put(
> >             this.configuration.getCassandraBoltConfigKey(),
> >             clientConfig);
> >     cassandraBolt = new CassandraBatchingBolt(
> >             this.configuration.getCassandraBoltConfigKey(),
> >             new CompositeRowTupleMapper(
> >                     this.configuration.getCassandraBoltKeyspace(),
> >                     this.configuration.getCassandraBoltColumnFamily(),
> >                     rowKeyFields));
> >     cassandraBolt.setAckStrategy(AckStrategy.ACK_ON_WRITE);
> >     return this;
> > }
> >
> > This is my table in Cassandra:
> >
> > CREATE TABLE store ( item text, location text, type text,
> >     PRIMARY KEY (dpci, location) ) WITH COMPACT STORAGE;
> >
> > The error I am getting is below:
> >
> > 15810 [batch-bolt-thread] WARN
> > com.netflix.astyanax.connectionpool.impl.Slf4jConnectionPoolMonitorImpl -
> > BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=21(21),
> > attempts=1]InvalidRequestException(why:Not enough bytes to read value of
> > component 0)
> >
> > 15811 [batch-bolt-thread] ERROR
> > com.hmsonline.storm.cassandra.bolt.CassandraBatchingBolt - Unable to write
> > batch.
> >
> > com.netflix.astyanax.connectionpool.exceptions.BadRequestException:
> > BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=21(21),
> > attempts=1]InvalidRequestException(why:Not enough bytes to read value of
> > component 0)
> >   at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119) ~[astyanax-core-1.56.44.jar:na]
> >   at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:338) ~[astyanax-core-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.executeOperation(ThriftKeyspaceImpl.java:493) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.access$000(ThriftKeyspaceImpl.java:79) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1.execute(ThriftKeyspaceImpl.java:123) ~[astyanax-thrift-1.56.44.jar:na]
> >   at com.hmsonline.storm.cassandra.client.AstyanaxClient.writeTuples(AstyanaxClient.java:417) ~[classes/:na]
> >   at com.hmsonline.storm.cassandra.bolt.CassandraBolt.writeTuples(CassandraBolt.java:67) ~[classes/:na]
> >   at com.hmsonline.storm.cassandra.bolt.CassandraBatchingBolt.executeBatch(CassandraBatchingBolt.java:49) ~[classes/:na]
> >   at com.hmsonline.storm.cassandra.bolt.AbstractBatchingBolt$BatchThread.run(AbstractBatchingBolt.java:134) [classes/:na]
> >
> > On Thursday, July 9, 2015, Ajay Chander wrote:
> >
> > > Hi All,
> > >
> > > I am having a hard time writing data into composite columns in
> > > Cassandra. Has anyone here used "CompositeRowTupleMapper" from
> > > https://github.com/hmsonline/storm-cassandra/blob/master/src/main/java/com/hmsonline/storm/cassandra/bolt/mapper/CompositeRowTupleMapper.java ?
> > >
> > > Is there a CompositeRowTupleMapperTest.java where I can find how it
> > > can be used?
> > >
> > > Any help is highly appreciated.
> > >
> > > Thank you,
> > > Ajay
> > >
> > > On Thursday, July 9, 2015, Ajay Chander wrote:
> > >
> > > > Hi Everyone,
> > > >
> > > > I am using hmsonline/storm-cassandra from git. I have a
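As a brief illustration of the distinction Jack draws above (a compound primary key versus a composite partition key), here is a CQL sketch reusing the column names from the thread. The table names are made up for illustration, and this is not a proposed fix for the error:

-- Compound primary key: item alone is the partition key and location is a
-- clustering column. With COMPACT STORAGE, this shape allows exactly one
-- non-primary-key column, here "type" (the rule Jack refers to).
CREATE TABLE store_compound (
    item text,
    location text,
    type text,
    PRIMARY KEY (item, location)
) WITH COMPACT STORAGE;

-- Composite partition key: the extra parentheses make item and location
-- together form the partition key, with no clustering column. This is a
-- different construct from the compound key above.
CREATE TABLE store_composite (
    item text,
    location text,
    type text,
    PRIMARY KEY ((item, location))
);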
Re: Write into composite columns
Thank you for your reply, Jack! That was my mistake. This is the actual table:

CREATE TABLE store ( item text, location text, type text,
    PRIMARY KEY (item, location) ) WITH COMPACT STORAGE;

And it still gives me the same error. I have been trying to find a way to solve it, but no luck so far.

Thank you,
Ajay

On Friday, July 10, 2015, Jack Krupansky wrote:

> First, this is a compound primary key, not a composite [partition] key.
>
> You don't have a column definition for dpci, your partition key.
>
> Compact storage requires exactly one non-primary-key column, but you have
> two. Maybe dpci and item were supposed to be the same.
>
> -- Jack Krupansky
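For completeness, since the original question asks how CompositeRowTupleMapper is meant to be used: the rowKeyFields passed to its constructor ("item" and "location" above) appear to name Storm tuple fields, so the upstream component has to emit tuples carrying non-null values under exactly those names. Below is a minimal, hypothetical upstream bolt sketching that wiring; it assumes the Storm 0.9.x backtype.storm API, and the class name and input field names are invented for illustration, not taken from the thread.

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical upstream bolt: emits tuples whose declared field names match
// the rowKeyFields ("item", "location") plus the value column ("type").
public class StoreRecordBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // Where these values come from is application-specific; the input
        // field names here ("rawItem", etc.) are assumptions. The point is
        // that none of the row-key values should be null or empty.
        String item = input.getStringByField("rawItem");
        String location = input.getStringByField("rawLocation");
        String type = input.getStringByField("rawType");
        collector.emit(new Values(item, location, type));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // These declared names must line up with the rowKeyFields and the
        // column names the CompositeRowTupleMapper is configured with.
        declarer.declare(new Fields("item", "location", "type"));
    }
}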