cqlsh - Cassandra Column Limit
When using Cassandra, in cqlsh, I type this:
select count(*) from info.customerinfo where key = 'ds10128832';
and got the following result:

 count
-------
 10000

Default LIMIT of 10000 was used. Specify your own LIMIT clause to get more results.
Basically I want to find out how many columns are stored under the row key ds10128832.
Does this output mean that I have 10000 columns stored under that key and cannot add any more columns because the limit is 10000? Will new columns simply not be inserted once the key reaches 10000? If so, how can I change this? Do I have to set a LIMIT? I have a lot of columns to store and I do not want to have a limit.
Cassandra terminology makes a distinction between partitions and rows. Your query result indicates that there are (at least) 10000 rows in the partition with key ds10128832.
Actually, as catpaws pointed out, since there is a default LIMIT of 10000, you may have more rows under that partition key. To count the rest, you'll need to specify a higher LIMIT clause, e.g.:
select count(*) from info.customerinfo where key = 'ds10128832' limit 100000;
You may need to increase the LIMIT further if you find you keep hitting it during the query.
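If raising the LIMIT becomes impractical (counting very wide partitions can also run into read timeouts), one alternative sketch is to count the partition in slices of the clustering key and add the results yourself. This assumes the customerinfo table shown further down, where account is the clustering column, and the split point 'm' is arbitrary:

-- Sketch: count the partition in two clustering-key slices and add the results.
-- Assumes account is the clustering column; 'm' is an arbitrary split point.
select count(*) from info.customerinfo where key = 'ds10128832' and account < 'm' limit 100000;
select count(*) from info.customerinfo where key = 'ds10128832' and account >= 'm' limit 100000;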
In your question you referred to counting columns, and I've answered about rows, so I hope I'm not misunderstanding your intent. Internally, Cassandra stores "rows" as columns (actually sets of columns) keyed on the sorting (clustering) keys, which is what I'm assuming you're referring to. The jargon is important in this case. As catpaws mentioned, there is a 2 billion column limit per partition, and that limit includes the sub-columns created by the sorting keys, so rows contribute to it. Each row contributes a number of actual (internal) columns equal to the number of values in the schema that are not primary keys.
For illustration, if the table is:
create table info.customerinfo (
    key text,
    account text,
    email text,
    screenname text,
    primary key (key, account)
);
then the count above would have counted the number of "account" rows under the partition key "ds10128832". Each (key, account) combination is a unique logical row, which internally has 2 columns: one for email and one for screenname. Each customerinfo "key" could therefore hypothetically hold 1 billion such accounts before hitting the 2 billion column limitation imposed by Cassandra.
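To make the arithmetic concrete, here is a small hypothetical example (the account values, emails, and screen names are made up):

-- Hypothetical data for the schema above.
insert into info.customerinfo (key, account, email, screenname)
values ('ds10128832', 'acct-1', 'alice@example.com', 'alice');

insert into info.customerinfo (key, account, email, screenname)
values ('ds10128832', 'acct-2', 'bob@example.com', 'bob');

-- The partition 'ds10128832' now holds 2 logical rows, each carrying 2 non-key
-- values (email, screenname), so roughly 2 * 2 = 4 internal columns count
-- toward the ~2 billion per-partition limit.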
Edit: hitting the limit will throw an exception.
cassandra cqlsh