BigData Mind Map

1. Tools

1.1. Scheduler

1.1.1. Oozie (Apache). Schedules Hadoop jobs; combines multiple jobs sequentially into a unit of work. Integrated with the Hadoop stack. Supports jobs for MR, Pig, Hive, and Sqoop, plus system apps, Java, and shell.
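As a sketch, a minimal Oozie workflow definition chaining a single Pig action (action names, script paths, and the `${...}` properties are illustrative placeholders, not from the source):

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="pig-step"/>
  <action name="pig-step">
    <pig>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>clean.pig</script>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Pig step failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Additional actions chain into a unit of work by pointing each action's `<ok to="..."/>` transition at the next one.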

1.2. Panel

1.2.1. Hue (Cloudera). Web panel / UI for Hadoop and its satellites (HDFS, MR, Hive, Oozie, Pig, Impala, Solr, etc.). Upload files to HDFS, submit Hive queries, and more.

1.3. Data analysis

1.3.1. Pig (Apache). High-level scripting language; can be invoked from Java, Ruby, and other code. Reads data from files, streams, or other sources; writes output to HDFS. Pig scripts are translated into a series of MR jobs.
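As a sketch, the canonical word count in Pig Latin (file and relation names are placeholders):

```pig
-- load raw lines, split into words, count occurrences
lines  = LOAD 'input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'wordcount_out';
```

Each relation-to-relation step above compiles down to map and reduce stages, which is what "translated into a series of MR jobs" means in practice.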

1.4. Data transfer

1.4.1. Sqoop (Apache). Transfers bulk data between Apache Hadoop and structured datastores such as relational databases. Two-way replication with both snapshots and incremental updates. Imports/exports between external datastores, HDFS, Hive, HBase, etc. Works with relational databases such as Teradata, Netezza, Oracle, MySQL, Postgres, and HSQLDB.
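A typical import invocation might look like the following (connection string, credentials, table, and column names are placeholders):

```shell
# full table import from MySQL into HDFS
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl --table orders \
  --target-dir /data/orders

# incremental follow-up: only rows whose id is beyond the last import
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl --table orders \
  --incremental append --check-column id --last-value 100000
```

The `--incremental append` mode is what enables the snapshot-then-delta replication style mentioned above.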

1.5. Visualization

1.5.1. Tableau. INPUT: can access data in Hadoop via Hive, Impala, Spark SQL, Drill, Presto, or any ODBC source in the Hortonworks, Cloudera, DataStax, and MapR distributions. OUTPUT: reports, web UI, client UI. Clustered, with nearly linear scalability. Can also access traditional DBs. Can explore and visualize data; supports SQL.

1.6. Security

1.6.1. Knox (Apache). Provides a single point of authentication and access for services in a Hadoop cluster.

1.7. Graph analytics

1.7.1. GraphX. Part of Spark; an API for graphs and graph-parallel computation.

2. Event Input

2.1. Kafka

2.1.1. Distributed publish-subscribe messaging system

2.1.2. High throughput

2.1.3. Persistent scalable messaging for parallel data load to Hadoop

2.1.4. Compression for performance, mirroring for HA and scalability

2.1.5. Usually used for clickstream

2.1.6. Pull-based output (consumers pull data)

2.1.7. Use Kafka if you need a highly reliable and scalable enterprise messaging system to connect many systems, one of which is Hadoop.
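For instance, wiring a clickstream topic through the stock console tools (host and topic names are placeholders; flag names vary across Kafka versions, these match the 0.8-era tooling):

```shell
# producer: pipe click events into the 'clicks' topic
kafka-console-producer.sh --broker-list localhost:9092 --topic clicks

# consumer: a Hadoop-side loader or a test consumer pulls them back out
kafka-console-consumer.sh --zookeeper localhost:2181 --topic clicks --from-beginning
```

Because consumers pull at their own pace, a slow Hadoop loader does not back-pressure the producers.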

2.2. Flume

2.2.1. Main use-case is to ingest data into Hadoop

2.2.2. Distributed system

2.2.3. Collecting data from many sources

2.2.4. Mostly log processing

2.2.5. Push-based output (agents push data)

2.2.6. Many pre-built collectors

2.2.7. OUTPUT: Write to HDFS, HBase, Cassandra etc

2.2.8. INPUT: Can use Kafka

2.2.9. Use Flume if you have non-relational data sources, such as log files, that you want to stream into Hadoop.

3. Machine Learning

3.1. MLlib

3.1.1. Spark's implementation of common machine learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction

3.1.2. Initial contribution from AMPLab, UC Berkeley, shipped with Spark since version 0.8

3.1.3. Part of Spark

3.2. Mahout

3.2.1. Runs on top of Hadoop

3.2.2. Uses MR

3.2.3. Originated as a Lucene subproject

4. Frameworks

4.1. Tez

4.1.1. Part of the Hadoop ecosystem

4.1.2. Framework for building YARN-based high-performance batch/interactive apps

4.1.3. Hive and Pig can run on it

4.1.4. Provides an API

4.1.5. Runs on top of YARN

4.2. YARN

4.2.1. Part of Hadoop

4.2.2. Resource manager

4.2.3. Security

4.3. ZooKeeper

4.3.1. Cluster coordination

4.3.2. Used by Storm, Hadoop, HBase, Elasticsearch, etc.

4.3.3. Group messaging and shared registers with an eventing mechanism similar to cache invalidation

5. Execution Engine

5.1. Spark (Apache)

5.1.1. Runs through YARN or as stand-alone

5.1.2. Can work with HDFS, HBase, Cassandra, Hive data

5.1.3. General data processing similar to MR, plus streaming, interactive queries, machine learning, etc.

5.1.4. RDD with caching

5.1.5. Stand-alone, YARN-based, Mesos-based, or side-by-side on an existing Hadoop deployment

5.1.6. Good for iterative algorithms and machine learning

5.1.7. Rich API and interactive shell

5.1.8. Many transformations and actions for RDDs

5.1.9. Extremely simple syntax

5.1.10. Compatible with existing Hadoop data

5.1.11. INPUT/OUTPUT: HDFS, HBase, local FS, and any data source with a Hadoop InputFormat
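The lazy transformation/action style can be illustrated with a plain-Python analogue (no Spark required; `map`/`filter` here are lazy like RDD transformations, and `sum` plays the role of an action that finally triggers computation):

```python
# toy analogue of an RDD pipeline: transformations are lazy,
# nothing is computed until an action (here, sum) is invoked
data = range(1, 11)                            # like sc.parallelize(range(1, 11))
squares = map(lambda x: x * x, data)           # transformation: rdd.map(...)
evens = filter(lambda x: x % 2 == 0, squares)  # transformation: rdd.filter(...)
total = sum(evens)                             # action: forces the whole chain
print(total)  # 220
```

In real Spark the same chain would be distributed across the cluster and the intermediate RDDs could be cached for iterative reuse.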

5.2. Hadoop MR

5.2.1. Distributed processing of large data sets across clusters of computers

5.2.2. MapReduce

5.2.3. High availability, fault tolerance

5.2.4. Good for batch processing

5.2.5. High latency

5.2.6. Doesn't use memory well

5.2.7. Iterative algorithms incur a lot of I/O

5.2.8. Primitive API and key/value I/O

5.2.9. Basics like joins require a lot of code

5.2.10. Output is spread across many files

5.2.11. Part of Hadoop
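The map/shuffle/reduce phases can be sketched in plain Python (a toy word count; real Hadoop MR distributes each phase across the cluster and spills intermediate key/value pairs to disk, which is where the I/O cost comes from):

```python
from collections import defaultdict

def map_phase(lines):
    # mapper: emit a (word, 1) pair for every word
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # shuffle/sort: group all values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reducer: sum the counts per word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big", "data tools"]
result = reduce_phase(shuffle(map_phase(lines)))
print(result)  # {'big': 2, 'data': 2, 'tools': 1}
```

Note how even this trivial aggregation needs three cooperating functions; a join would need custom key tagging on top, which is the "a lot of code" complaint above.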


6. Storage

6.1. NoSQL

6.1.1. Accumulo (Apache). BigTable-style key/value store; distributed, petabyte scale, fault tolerant. Sorted, schema-less, with graph support. On top of Hadoop, ZooKeeper, and Thrift. B-trees, bloom filters, compression. Uses ZooKeeper and MR; has a CLI. Clustered, with linear scalability and a failover master. Originated at the NSA. Per-cell ACLs plus table ACLs. Built-in user DB and plaintext on the wire. Iterators can be inserted into the read path and table-maintenance code. Advanced UI.

6.1.2. HBase. On top of HDFS. BigTable-style key/value store; distributed, petabyte scale, fault tolerant. Clustered, with linear scalability and a failover master. Sorted; RESTful interface. Users include FB, eBay, Flurry, Adobe, Mozilla, TrendMicro. B-trees, bloom filters, compression. Uses ZooKeeper and MR; has a CLI. Column-family ACLs. Advanced security and integration with Hadoop. Triggers, stored procedures, cluster-management hooks. 4-5 times slower than HDFS in a batch context; HBase is for random access. Stores key/value pairs in columnar fashion (columns are clubbed together as column families). Provides low-latency access to small amounts of data within a large data set. Flexible data model. Not optimized for classic transactional applications or even relational analytics. CPU- and memory-intensive, with sporadic large sequential I/O access. Supports massively parallel processing via MapReduce, with HBase as both source and sink.
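Basic random-access usage via the HBase shell might look like the following (table, row, and column names are placeholders):

```shell
hbase shell
> create 'users', 'profile'                      # table with one column family
> put 'users', 'row1', 'profile:name', 'Alice'   # write a single cell
> get 'users', 'row1'                            # low-latency point read
> scan 'users', {LIMIT => 10}                    # bounded range scan
```

Point `get`s like this are the workload HBase is built for, as opposed to the full-file streaming reads HDFS favors.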

6.2. Distributed Filesystems

6.2.1. HDFS. Part of Hadoop. Optimized for streaming access to large files; follows a write-once, read-many ideology. Doesn't support random read/write. Good for MapReduce jobs that are primarily I/O-bound with fixed memory and sporadic CPU.

6.2.2. Tachyon. Memory-centric; compatible with Hadoop/Spark without changes. High performance by leveraging lineage information and using memory aggressively. 40 contributors from over 15 institutions, including Yahoo, Intel, and Red Hat. High-throughput writes. Claimed to read up to 200x and write up to 300x faster than HDFS, with 17x lower latency. Uses memory (instead of disk) and recomputation (instead of replication) to produce a distributed, fault-tolerant, high-throughput file system. Target data sets are in the gigabytes or terabytes.

7. Query Engine

7.1. Interactive

7.1.1. Drill (Apache). Low latency. JSON-like internal data model to represent and process data. Dynamic schema discovery: can query self-describing data formats such as JSON, NoSQL, Avro, etc. SQL queries against a petabyte or more of data distributed across 10,000-plus servers. Drillbit engine instead of MapReduce. Full ANSI SQL:2003. Low-latency distributed execution engine. Handles unknown schemas on HBase, Cassandra, MongoDB. Fault tolerant. Memory intensive. INPUT: RDBMS, NoSQL, HBase, Hive tables, JSON, Avro, Parquet, FS, etc. Interfaces: shell, web UI, ODBC, JDBC, C++ API. Possibly not mature enough yet.

7.1.2. Impala (Cloudera). Low latency. HiveQL and SQL-92. INPUT: can use HDFS or HBase without moving/transforming data; can use Hive tables via the metastore. Integrated with Hadoop; queries Hadoop data. Requires a specific format for best performance. Memory intensive (when joining two tables, one of them needs to fit in cluster memory). Claimed to be about 30x faster than Hive for queries with multiple MR jobs, though that was before Stinger. No fault tolerance. Reads Hadoop file formats, including text, LZO, SequenceFile, Avro, RCFile, and Parquet. Fine-grained, role-based authorization with Sentry; supports Hadoop security.

7.1.3. Presto (Facebook). ANSI SQL. Real-time queries over petabytes of data. INPUT: can use Hive, HDFS, HBase, Cassandra, Scribe. Similar to Impala and Hive/Stinger, but with its own engine. Claimed to be more CPU-efficient than Hive/MapReduce. Only recently open-sourced, so the community is still small. Not fault-tolerant. Read-only; no UDFs. Requires a specific format for best performance.

7.1.4. Hive 0.13 with Stinger. Many MR micro-jobs. HQL. Can use HBase.

7.1.5. Spark SQL. SQL, HQL, or Scala to run on Spark. Can also use Hive data. JDBC/ODBC interfaces. Part of Spark.

7.1.6. Shark. Replaced by Spark SQL.

7.2. Non-interactive

7.2.1. Hive < 0.13. Long-running jobs. HQL. Based on Hadoop and MR. High latency.
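A typical HQL batch workload, which Hive compiles into MR jobs (table and column names are placeholders):

```sql
-- external table over raw tab-separated logs already in HDFS
CREATE EXTERNAL TABLE logs (ts STRING, url STRING, user_id STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/logs';

-- aggregate query; compiled into one or more MapReduce jobs
SELECT url, COUNT(*) AS hits
FROM logs
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```

The MR job startup overhead behind each stage is the source of the high latency noted above.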

8. Stream processing

8.1. Spark Streaming (Berkeley)

8.1.1. Part of Spark

8.1.2. INPUT: Can use Kafka, Flume, Twitter, ZeroMQ, Kinesis, TCP as a source

8.1.3. OUTPUT to FS, DB, live dashboards, HDFS

8.1.4. High-throughput, fault-tolerant on live streams

8.1.5. Machine learning, graph processing, MR, join and window algos can be applied

8.1.6. Fault tolerance: exactly-once semantics

8.1.7. Micro-batching, with batch intervals of a few seconds

8.1.8. Moderate latency

8.1.9. Can use YARN, Mesos
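Micro-batching can be illustrated with a plain-Python sketch (no Spark involved): the stream is cut into fixed-size batches and each batch is processed as one small job, which is why end-to-end latency is bounded below by the batch interval:

```python
def micro_batches(stream, batch_size):
    # cut an (unbounded) event stream into fixed-size batches
    batch = []
    for event in stream:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:          # flush the final partial batch
        yield batch

# each batch is processed as one small job, e.g. a count per batch
events = ["click"] * 7
counts = [len(b) for b in micro_batches(events, 3)]
print(counts)  # [3, 3, 1]
```

Record-at-a-time engines like Storm process each event as it arrives instead, trading some throughput for sub-second latency.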

8.2. Samza (LinkedIn)

8.2.1. On YARN

8.2.2. INPUT/OUTPUT: Kafka and streams

8.2.3. Asynchronous

8.2.4. Near-realtime

8.2.5. No dynamic re-balancing

8.2.6. Work with YARN and Kafka out of the box

8.2.7. On top of Hadoop 2 / YARN

8.3. Storm (Twitter)

8.3.1. Distributed

8.3.2. Record-at-a-time processing

8.3.3. INPUT: Queue: Kestrel, Kafka, Flume, RabbitMQ, JMS, Amazon Kinesis

8.3.4. OUTPUT: Kafka, HBase, HDFS since 0.9.1

8.3.5. Real-time, sub-second latency

8.3.6. Reliable

8.3.7. Large data streams

8.3.8. Dynamic re-balancing

8.3.9. Fault tolerance: at-least-once semantics

8.3.10. Moderate throughput

8.3.11. On top of Hadoop 2 / YARN

8.3.12. Can use YARN, Mesos

8.3.13. Distributed RPC