Mind Map: Cassandra

1. data model

1.1. relationships

1.1.1. a cluster is a container for keyspaces

1.1.2. a keyspace is a container for column families

1.1.2.1. like a database

1.1.3. a column family is a container for ordered rows

1.1.3.1. like tables

1.1.4. each row contains ordered columns

1.2. clusters

1.2.1. def

1.2.1.1. outermost structure: ring

1.2.2. each node holds replicas for different ranges of the data

1.2.3. replication factor

1.3. keyspaces

1.3.1. def

1.3.1.1. outermost container for data

1.3.2. consists of

1.3.2.1. name

1.3.2.2. keyspace-wide attributes

1.3.2.2.1. replication factor

1.3.2.2.2. replica placement strategy

1.3.2.2.3. column families
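
A minimal sketch of defining a keyspace with these attributes, using the DataStax Python driver (cassandra-driver) and modern CQL; the contact address 127.0.0.1, the keyspace name my_keyspace, and the replication settings are assumptions for illustration.

    from cassandra.cluster import Cluster  # pip install cassandra-driver

    # Connect to a single local node (the address is an assumption for illustration).
    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect()

    # A keyspace bundles a name with keyspace-wide attributes:
    # the replica placement strategy and the replication factor.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS my_keyspace
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)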

1.4. column family

1.4.1. like a four dimensional hash

1.4.1.1. [keyspace][column family][row key][column name] -> value

1.4.2. similar to, but not the same as, a relational table

1.4.2.1. cassandra is schema free

1.4.2.1.1. although column families are defined, columns are not

1.4.2.1.2. can freely add any column to any column family at any time

1.4.2.2. column family has two attributes

1.4.2.2.1. name

1.4.2.2.2. comparator

1.4.2.3. storage

1.4.2.3.1. column families are each stored in separate files on disk

1.4.2.3.2. in an RDBMS, how tables are stored on disk is transparent to the user

1.4.2.4. write data to column family

1.4.2.4.1. you specify values for one or more columns
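
A minimal sketch of writing to a column family (declared as a table in modern CQL) with the Python driver, assuming the my_keyspace keyspace from the earlier sketch; the users table and its columns are hypothetical.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # Hypothetical column family; in modern CQL it is declared as a table.
    session.execute("""
        CREATE TABLE IF NOT EXISTS users (
            user_id text PRIMARY KEY,
            name    text,
            email   text
        )
    """)

    # Only the columns you set are stored for this row; unset columns
    # simply do not exist for it (no NULL padding on disk).
    session.execute(
        "INSERT INTO users (user_id, name) VALUES (%s, %s)",
        ('alice', 'Alice'))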

1.5. column

1.5.1. a triplet of a name, a value and a timestamp

1.5.2. not a column from the relational world

1.5.2.1. size of rows

1.5.2.1.1. wide rows

1.5.2.1.2. skinny rows

1.5.3. super columns

1.5.3.1. a special kind of column

1.5.3.2. the value is a map of sub columns

1.5.3.2.1. the super column idea goes only one level deep (sub columns cannot contain further sub columns)
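
A small sketch showing that every column value carries a timestamp, read back here with CQL's WRITETIME function; it assumes the hypothetical users table and row from the previous sketch.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # WRITETIME exposes the timestamp stored alongside the column's
    # name and value (microseconds since the epoch).
    row = session.execute(
        "SELECT name, WRITETIME(name) AS ts FROM users WHERE user_id = %s",
        ('alice',)).one()
    print(row.name, row.ts)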

2. RDBMS vs Cassandra

2.1. how to switch from RDBMS to Cassandra

2.1.1. start with queries

2.1.1.1. then model data

2.1.2. supply a timestamp with each write (the client provides the timestamps used to resolve conflicts)

2.2. Design differences between RDBMS and Cassandra

2.2.1. No SQL query language (early versions used the Thrift API; later versions added CQL)

2.2.2. how secondary indexes are handled

2.2.3. Sorting is a design decision

2.2.3.1. RDBMS - order by

2.2.3.2. Cassandra: column family's CompareWith element
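
A minimal sketch of fixing the sort order at definition time: in modern CQL the comparator's role is played by clustering columns and CLUSTERING ORDER BY. The events_by_day table is hypothetical.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # Rows within a partition are stored sorted by the clustering column;
    # the order is chosen when the table is defined, not at query time.
    session.execute("""
        CREATE TABLE IF NOT EXISTS events_by_day (
            day        text,
            event_time timestamp,
            payload    text,
            PRIMARY KEY (day, event_time)
        ) WITH CLUSTERING ORDER BY (event_time DESC)
    """)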

2.2.4. denormalization

2.2.4.1. cassandra performs best when the data model is denormalized (see the sketch below)

2.2.5. No referential integrity

2.2.5.1. in an RDBMS, you could specify foreign keys in a table to reference the primary key of a record in another table

2.2.5.2. operations such as cascading deletes are not available
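
A minimal sketch of the denormalized, no-foreign-keys style: the same user is written to two query-specific tables in one batch, because there are no joins or cascading operations to keep them in sync. The users_by_email table is hypothetical; the users table comes from the earlier sketch.

    from cassandra.cluster import Cluster
    from cassandra.query import BatchStatement, SimpleStatement

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    session.execute("""
        CREATE TABLE IF NOT EXISTS users_by_email (
            email   text PRIMARY KEY,
            user_id text,
            name    text
        )
    """)

    # No foreign keys or joins: the application writes the same data to
    # every table that serves a query and keeps the copies in sync itself.
    batch = BatchStatement()
    batch.add(
        SimpleStatement("INSERT INTO users (user_id, name, email) VALUES (%s, %s, %s)"),
        ('alice', 'Alice', 'alice@example.com'))
    batch.add(
        SimpleStatement("INSERT INTO users_by_email (email, user_id, name) VALUES (%s, %s, %s)"),
        ('alice@example.com', 'alice', 'Alice'))
    session.execute(batch)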

3. architecture

3.1. features

3.1.1. high availability

3.1.2. no single point of failure

3.1.3. inspired by Amazon's Dynamo paper

3.1.4. originally developed at Facebook, written in Java

3.2. architecture

3.2.1. data distribution

3.2.1.1. Ring

3.2.1.1.1. in a cassandra cluster, data is assigned to nodes as if they form a ring of tokens

3.2.1.2. partitioner

3.2.1.2.1. a hashing algorithm to determine how data is distributed across the cluster

3.2.1.2.2. by default, the Murmur3 hashing algorithm (Murmur3Partitioner) is used
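
A small sketch of where the partitioner places rows on the ring: CQL's token() function returns the partition key's hash (a Murmur3 token with the default partitioner). It assumes the hypothetical users table from the earlier sketches.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # token() applies the cluster's partitioner to the partition key,
    # giving the ring position that decides which nodes own the row.
    for row in session.execute("SELECT user_id, token(user_id) AS t FROM users"):
        print(row.user_id, row.t)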

3.2.2. peer-to-peer

3.2.2.1. compared with master-slave replication

3.2.2.1.1. optimized for reading data

3.2.2.1.2. but replication is one-way, from master to slave

3.2.2.1.3. capacity depends on master

3.2.2.1.4. even a backup master might fail

3.2.2.2. any given node is structurally identical to any other node

3.2.3. gossip and failure detection

3.2.3.1. gossip protocol

3.2.3.1.1. used for failure detection

3.2.3.1.2. gossiper

3.2.3.2. accrual failure detection

3.2.3.2.1. failure detection is a continuous suspicion level rather than a simple up/down verdict, so it can be tuned

3.2.3.2.2. heartbeats

3.2.4. tunable consistency

3.2.4.1. cap theorem

3.2.4.1.1. a distributed system can guarantee at most two of consistency, availability, and partition tolerance at the same time

3.2.4.1.2. when the network is reliable and latency is low (no partitions), you can effectively have all three at once

3.2.4.2. def

3.2.4.2.1. whether a read always returns the most recently written value

3.2.4.2.2. in many other systems the consistency level is fixed by the protocol; in Cassandra it is tunable per operation
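
A minimal sketch of tuning consistency per operation with the Python driver, reusing the hypothetical users table; QUORUM means a majority of replicas must acknowledge before the call returns.

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # The consistency level is chosen per statement, not fixed by the protocol.
    write = SimpleStatement(
        "INSERT INTO users (user_id, name) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM)
    session.execute(write, ('bob', 'Bob'))

    read = SimpleStatement(
        "SELECT name FROM users WHERE user_id = %s",
        consistency_level=ConsistencyLevel.ONE)
    print(session.execute(read, ('bob',)).one().name)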

3.3. internal data storage

3.3.1. overview

3.3.1.1. write path: commit log -> memtable -> flush to an SSTable on disk

3.3.2. commit log

3.3.2.1. crash-recovery mechanism that supports Cassandra's durability goals

3.3.2.2. a write will not count as successful until it is written to the commit log

3.3.2.3. after being written to the commit log, the value is written to a memory-resident data structure called the memtable

3.3.2.4. when the number of objects stored in the memtable reaches a threshold

3.3.2.4.1. the contents of memtable are flushed to disk in a file called SSTable

3.3.2.4.2. each SSTable has an associated bloom filter

3.3.2.5. a new memtable is then created

3.3.3. memtable

3.3.3.1. the value is added to the memtable after it has been written to the commit log

3.3.3.2. an in-memory store used to speed up operations

3.3.4. SSTable

3.3.4.1. the contents of the memtable are written to an SSTable once the memtable is full

3.3.4.2. immutable: once written, it cannot be changed

3.3.4.3. changes are appended

3.3.4.4. sequential write to disk

3.3.5. compaction

3.3.5.1. merge of SSTables

3.3.5.2. new merged data is sorted as well

3.3.5.3. reduces the number of disk seeks

3.3.6. read/write operations inside cassandra

3.3.6.1. a client can contact any node for a read or write

3.3.6.2. that node becomes the coordinator for the request

3.3.6.2.1. read/write within a data center

3.3.6.2.2. read/write across data center

3.3.7. data replication

3.4. Anti-Entropy and read repair

3.4.1. anti-entropy

3.4.1.1. replica synchronization mechanism

3.4.1.2. used in Amazon's Dynamo

3.4.1.2.1. merkle tree

3.4.1.3. cassandra

3.4.1.3.1. each column family has its own Merkle tree

3.4.1.4. after each update, the anti-entropy algorithm kicks in

3.4.1.4.1. computes a checksum of the data and compares it with the peers' checksums

3.4.1.4.2. if the checksums differ, the out-of-date replicas are updated

3.4.2. read repair

3.4.2.1. to read

3.4.2.1.1. a client connects to any node in the cluster

3.4.2.1.2. based on the consistency level specified, a number of nodes are read

3.4.2.1.3. the read operation blocks until client-specified consistency level is met

3.4.2.1.4. if some nodes are detected to have responded with an out-of-date value, the most recent value is written back to them

3.4.2.2. performance improvement

3.4.2.2.1. client does not block until all nodes are read

3.4.2.2.2. with many clients, it is important to read from a quorum of nodes to ensure that at least one of them has the most recent value
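
A short sketch of the quorum arithmetic behind that guarantee: with replication factor N, a quorum is floor(N/2) + 1, so a quorum write and a quorum read always overlap in at least one replica holding the most recent value. The replication factor of 3 is an assumption for illustration.

    # Quorum overlap check (plain arithmetic, no driver needed).
    def quorum(replication_factor: int) -> int:
        return replication_factor // 2 + 1

    N = 3             # replication factor, assumed for illustration
    W = quorum(N)     # replicas that must acknowledge a write -> 2
    R = quorum(N)     # replicas that must answer a read       -> 2
    assert R + W > N  # at least one replica read is guaranteed to be up to date
    print(N, W, R)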

3.5. System Keyspace

3.5.1. Cassandra uses it to store metadata about the cluster to aid in operations

3.5.2. stores metadata for the local node as well as hinted handoff information
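
A small sketch of peeking at that metadata: the system keyspace can be queried like any other, e.g. system.local for the local node and system.peers for the other nodes; the contact address is an assumption.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect()

    # Metadata about the local node lives in the system keyspace.
    local = session.execute(
        "SELECT cluster_name, data_center, rack FROM system.local").one()
    print(local.cluster_name, local.data_center, local.rack)

    # One row per remote node in the cluster.
    for peer in session.execute("SELECT peer, data_center FROM system.peers"):
        print(peer.peer, peer.data_center)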

4. use case

4.1. general data storage

4.1.1. use a cassandra cluster as your primary data persistence layer

4.2. time-series data storage

4.2.1. data is sorted and written sequentially to disk

4.2.2. well suited to retrieving data and filtering by range

4.2.3. fast access thanks to few disk seeks
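
A minimal sketch of the range read this enables, assuming the hypothetical events_by_day table defined earlier: within a partition the rows are laid out in time order, so a slice by time range is a largely sequential read.

    from datetime import datetime
    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # Slice one day's partition by time range; rows come back in the
    # clustering order declared for the table.
    rows = session.execute(
        "SELECT event_time, payload FROM events_by_day "
        "WHERE day = %s AND event_time >= %s AND event_time < %s",
        ('2024-01-01',
         datetime(2024, 1, 1, 0, 0),
         datetime(2024, 1, 1, 12, 0)))
    for row in rows:
        print(row.event_time, row.payload)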

4.3. ttl data storage

4.3.1. some data can be discarded after some time

4.3.2. with cassandra's per-column TTL, this is easy to implement
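
A minimal sketch of expiring data with a write-time TTL, reusing the hypothetical users table; the written columns disappear automatically once the TTL (in seconds) elapses.

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('my_keyspace')

    # USING TTL sets a time-to-live in seconds on the written columns;
    # after 86400 s (one day) the values expire and are discarded.
    session.execute(
        "INSERT INTO users (user_id, email) VALUES (%s, %s) USING TTL 86400",
        ('carol', 'carol@example.com'))

    # TTL() shows the remaining time-to-live of a column.
    print(session.execute(
        "SELECT TTL(email) AS remaining FROM users WHERE user_id = %s",
        ('carol',)).one().remaining)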