Big Data Fundamentals

1. Use cases

2. Database Architecture

2.1. ACID

2.1.1. Atomicity

2.1.1.1. All or nothing for transactions

2.1.2. Consistency

2.1.2.1. Only valid data is saved

2.1.3. Isolation

2.1.3.1. Concurrent transactions don't interfere with each other, e.g. two people purchasing the last ticket

2.1.4. Durability

2.1.4.1. Committed data survives failures, e.g. an AZ (Availability Zone) failure
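
To make atomicity concrete, here is a minimal sketch using Python's built-in sqlite3 module (the tickets table and the simulated payment failure are made up for illustration): either the whole transaction commits or none of it does.

```python
import sqlite3

# Illustrative only: any ACID-compliant RDBMS behaves the same way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, buyer TEXT)")
conn.execute("INSERT INTO tickets (id, buyer) VALUES (1, NULL)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE tickets SET buyer = 'alice' WHERE id = 1")
        raise RuntimeError("payment failed")  # simulate a mid-transaction failure
except RuntimeError:
    pass

# All or nothing: the UPDATE was rolled back together with the failed transaction.
print(conn.execute("SELECT buyer FROM tickets WHERE id = 1").fetchone())  # (None,)
```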

3. OLTP vs OLAP

3.1. OLTP

3.1.1. Many users, constant transactions

3.1.2. Critical to the business

3.1.3. Data volumes typically in the GB range

3.2. OLAP

3.2.1. Analytics

3.2.2. Periodic large updates and complex queries

3.2.3. Data volumes in the TB/PB range
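
A rough illustration of the workload difference, with hypothetical table and column names: an OLTP statement touches a handful of rows for one user, while an OLAP query scans and aggregates large historical ranges.

```python
# Illustrative only: orders, sales_history, etc. are made-up names.
oltp_query = """
-- OLTP: point lookup/update, many concurrent users, milliseconds per transaction
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
"""

olap_query = """
-- OLAP: periodic, complex aggregation over a large history
SELECT region, product_category, SUM(revenue) AS total_revenue
FROM sales_history
WHERE sale_date BETWEEN '2023-01-01' AND '2023-12-31'
GROUP BY region, product_category
ORDER BY total_revenue DESC;
"""
```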

4. Big data

4.1. volume

4.1.1. amount of data

4.2. velocity

4.2.1. frequency/speed at which data arrives

4.3. variety

4.3.1. different types and formats of data

5. Components

5.1. Apache Hadoop

5.1.1. Scalable storage and batch processing system

5.1.2. Complements existing systems by scaling out

5.1.3. HDFS

5.1.3.1. Hadoop Distributed File System

5.1.4. YARN

5.1.4.1. Resource management: scheduling and executing jobs on the cluster

5.1.5. Map Reduce

5.1.5.1. YARN-based framework for parallel processing of large data sets on the cluster

5.1.6. How does this work?

5.1.6.1. Architecture

5.1.6.1.1. NameNode - stores file system metadata (namespace and block locations)

5.1.6.1.2. DataNodes - store and serve the actual data blocks

5.1.6.2. Process

5.1.6.2.1. Client asks the NameNode where to write, then sends the data to a DataNode in blocks (64 MB)

5.1.6.2.2. The DataNode replicates each block to 2 other DataNodes (replication factor 3)
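
The write path can be sketched as a toy simulation (this is not the real HDFS client API; the node names are made up, and while the 64 MB block size above matches older Hadoop defaults, Hadoop 2+ defaults to 128 MB): the NameNode only tracks metadata, while the DataNodes store and replicate the blocks.

```python
BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB blocks
REPLICATION = 3                 # 1 primary DataNode + 2 replicas

data_nodes = ["dn1", "dn2", "dn3", "dn4"]

def write_file(name: str, size_bytes: int) -> dict:
    """Simulate an HDFS-style write: the NameNode records which blocks exist and
    where they live; the bytes themselves go from the client to a DataNode,
    which pipelines each block to two more nodes."""
    namenode_metadata = {}
    num_blocks = -(-size_bytes // BLOCK_SIZE)  # ceiling division
    for i in range(num_blocks):
        block_id = f"{name}_blk_{i}"
        # pick REPLICATION distinct DataNodes (simple round-robin placement)
        replicas = [data_nodes[(i + r) % len(data_nodes)] for r in range(REPLICATION)]
        namenode_metadata[block_id] = replicas
    return namenode_metadata

print(write_file("logs.txt", 200 * 1024 * 1024))  # 200 MB -> 4 blocks, 3 replicas each
```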

5.1.7. Big data job types

5.1.7.1. MapReduce

5.1.7.1.1. Parallel processing of large data sets

5.1.7.1.2. Map - processes input splits in parallel on separate nodes, emitting key-value pairs

5.1.7.1.3. Reduce - aggregates the outputs per key

5.1.7.1.4. Partition - determines which reducer receives each key-value pair from the mapper

5.1.7.1.5. Shuffle - transfers the data from the mappers to the reducers

5.1.7.1.6. Sort - data is sorted by key as it arrives at the reducers
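
The phases above can be sketched in plain Python with the classic word-count example (an in-memory toy, not a real YARN job): map emits key-value pairs, partition picks a reducer, shuffle routes the pairs, and each reducer sorts by key before reducing.

```python
from collections import defaultdict
from itertools import groupby

def map_phase(line):                      # Map: emit (key, value) pairs
    for word in line.split():
        yield word.lower(), 1

def partition(key, num_reducers):         # Partition: which reducer gets this key
    return hash(key) % num_reducers

def reduce_phase(key, values):            # Reduce: aggregate the values per key
    return key, sum(values)

lines = ["Big data big clusters", "big data tools"]
num_reducers = 2

# Shuffle: route every mapper output pair to its reducer's bucket
buckets = defaultdict(list)
for line in lines:
    for key, value in map_phase(line):
        buckets[partition(key, num_reducers)].append((key, value))

# Sort by key within each reducer, then reduce each group
for reducer, pairs in buckets.items():
    pairs.sort()
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        print(reducer, reduce_phase(key, (v for _, v in group)))
```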

5.1.7.2. Hive

5.1.7.2.1. Data warehousing infrastructure on top of Hadoop

5.1.7.2.2. Hive Query Language (HiveQL/HQL), similar to SQL

5.1.7.2.3. Allows querying unstructured/semi-structured data as if it were structured

5.1.7.2.4. Allows ad hoc querying and analysis
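
A hypothetical HiveQL example (table, columns, and paths are made up): Hive projects a schema onto files already sitting in HDFS and compiles the query into cluster jobs, so analysts can work in a SQL-like language.

```python
hql = """
CREATE EXTERNAL TABLE IF NOT EXISTS page_views (
    user_id STRING,
    url     STRING,
    ts      TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
LOCATION '/data/page_views';

SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url
ORDER BY views DESC
LIMIT 10;
"""
# Could be submitted with the Hive CLI if it is installed, e.g.:
# subprocess.run(["hive", "-e", hql], check=True)
```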

5.1.7.3. Pig

5.1.7.3.1. Programming environment for data tasks

5.1.7.3.2. Pig Latin - procedural data-flow language that compiles to MapReduce, an alternative to declarative HQL
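
For comparison, the same top-pages analysis as a hypothetical Pig Latin script (paths and field names are made up): each statement is one step in a procedural data flow that Pig turns into MapReduce jobs.

```python
pig_script = """
views   = LOAD '/data/page_views' USING PigStorage('\\t')
          AS (user_id:chararray, url:chararray, ts:chararray);
by_url  = GROUP views BY url;
counts  = FOREACH by_url GENERATE group AS url, COUNT(views) AS views;
ordered = ORDER counts BY views DESC;
top10   = LIMIT ordered 10;
STORE top10 INTO '/output/top_pages';
"""
```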

6. Data Warehouse

6.1. Uses ETL (extract, transform, load) to ingest the data; see the sketch at the end of this section

6.2. Data Mart

6.2.1. A smaller data warehouse (DWH)

6.2.2. Covers a subset of the data --> faster and easier
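
A toy ETL sketch in Python (the CSV source, its columns, and the SQLite file standing in for the warehouse are all assumptions): extract raw rows from a source, transform them into clean typed records, and load them into a warehouse table.

```python
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)          # Extract: read raw rows from a source

def transform(rows):
    for row in rows:                          # Transform: clean types, drop bad rows
        try:
            yield row["region"].strip().upper(), float(row["revenue"])
        except (KeyError, ValueError):
            continue

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS fact_sales (region TEXT, revenue REAL)")
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)
    conn.commit()                             # Load: write into the warehouse table

warehouse = sqlite3.connect("warehouse.db")   # stand-in for the real DWH
load(transform(extract("sales_export.csv")), warehouse)
```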