Data structures (mind map)

1. Stacks

1.1. With constant time for push, pop, min

1.1.1. O(n) additional memory for tracking the min at each step

1.1.2. Link
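
The constant-time min can be sketched with an auxiliary stack of running minima (a minimal Python sketch; the class and method names are illustrative):

```python
class MinStack:
    """Stack with O(1) push, pop and min, at the cost of O(n) extra memory."""
    def __init__(self):
        self._items = []
        self._mins = []   # _mins[-1] is always the min of _items

    def push(self, x):
        self._items.append(x)
        self._mins.append(x if not self._mins else min(x, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._items.pop()

    def min(self):
        return self._mins[-1]
```

Each element carries the minimum of everything below it, so popping restores the previous minimum for free.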

1.2. Usage

1.2.1. Stacks in memory

1.2.2. In DFS

1.2.3. For saving contexts in processors

1.3. Features

1.3.1. Unknown length

1.3.2. No need to sort

1.3.3. O(1) access to the head

2. Heap

2.1. Tree

2.2. if B is a child node of A, then key(A) ≥ key(B)

2.3. Translation to array

2.3.1. Simple: store the nodes row by row (level order), left to right
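
The row-by-row layout gives simple index arithmetic; a small Python sketch of the usual formulas:

```python
def parent(i): return (i - 1) // 2   # index of i's parent
def left(i):   return 2 * i + 1     # index of i's left child
def right(i):  return 2 * i + 2     # index of i's right child

# A max-heap stored row by row, left to right:
heap = [9, 5, 8, 1, 3, 2, 7]
# The key(A) >= key(B) rule becomes: heap[parent(i)] >= heap[i] for all i > 0.
```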

2.4. Algorithms

2.4.1. Heapsort

2.5. Usage

2.5.1. For top k elements

2.5.1.1. O(n log k)
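
One way to get the O(n log k) bound is a min-heap capped at size k (a Python sketch using the standard heapq module; top_k is an illustrative name):

```python
import heapq

def top_k(items, k):
    """Return the k largest items; each heap operation costs O(log k)."""
    heap = []                          # min-heap of the k best seen so far
    for x in items:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:              # beats the worst of the current top k
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)
```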

2.5.2. Heapsort (in-place, O(n log n))

3. Space structures

3.1. K-D tree

3.1.1. Hyperplanes

3.1.2. Points to the left of the hyperplane go into one subtree, points to the right into the other (an axis-aligned plane, e.g. through the X axis, can serve as the hyperplane)

4. Priority Queue

4.1. Peek at the highest-priority element: O(1); pop: O(log n) with a heap

4.2. Typically implemented using a heap

5. In programming

5.1. Element cost in data structures

6. Trees

6.1. Trie (prefix tree)

6.1.1. Link

6.1.2. Used for storing info about words

6.1.3. Easy search

6.1.4. Usage

6.1.4.1. Dictionaries

6.1.4.2. In autocomplete suggestions
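
A trie can be sketched with nested dicts in Python (the class and method names are illustrative):

```python
class Trie:
    """Prefix tree: each node is a dict from character to child node."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True                  # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return '$' in node

    def starts_with(self, prefix):        # the operation behind suggestions
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```

Lookups cost O(length of the word), independent of how many words are stored, which is why tries suit dictionaries and autocomplete.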

6.2. Binary

6.2.1. Unordered

6.2.2. BST

6.2.2.1. Ordered

6.2.2.2. Algorithms

6.2.2.2.1. Given pre-order and post-order traversal results, restore the BST

6.2.2.3. Article

6.2.2.4. Usage

6.2.2.4.1. Storing data, easy access

6.2.2.4.2. Prefer balanced variants

6.3. In programming

6.3.1. Scaling

6.3.1.1. Trees don't scale well; a HashMap is often better

6.4. Algorithms

6.4.1. Generate all possible trees

6.5. Self-balancing

6.5.1. RB tree

6.5.1.1. Rotation

6.5.1.1.1. quite simple

6.5.1.2. Rules

6.5.1.2.1. Root: black

6.5.1.2.2. A red node can have only black children

6.5.1.2.3. Every path from the root to a leaf has the same black height

6.5.2. AVL

6.5.2.1. First such structure

6.5.2.2. Balance factor (-1, 0, 1)

6.5.2.3. Rotations

6.5.2.4. Wiki

6.5.3. Usage

6.5.3.1. For storing sorted data with guaranteed O(log n) access

6.5.3.2. Representing hierarchies

6.6. B-tree

6.6.1. Complexity

6.6.1.1. O(log n)

6.6.2. Both internal nodes and leaves contain elements

6.6.3. Leaves can hold links to the next leaf (forming a singly linked list)

6.7. B+ tree

6.7.1. Internal nodes don't contain data; only leaves do

6.7.2. Leaves form a linked list

6.7.2.1. Easy iteration, easy deletion

6.7.2.1.1. A big plus in block systems

6.7.3. Usage

6.7.3.1. Block data systems

6.7.3.1.1. File systems

6.7.3.1.2. RDBMS (storing blobs?)

6.8. 2-4 Trees

6.8.1. Like B-trees where each node has 2, 3, or 4 children

6.9. Interval tree

6.9.1. Usage

6.9.1.1. Interval queries (e.g., scheduling)

6.9.1.2. Graphics (finding overlap zones)

6.10. General usage

6.10.1. Hierarchies

6.10.2. DOM, XML

6.11. Fenwick tree

6.11.1. Quick calculation of the sum of elements from i to j in an array (O(log n) per query and update)
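
A minimal Python sketch of a Fenwick (binary indexed) tree, assuming 1-based indices (the class and method names are illustrative):

```python
class Fenwick:
    """Point update and prefix/range sum, both in O(log n)."""
    def __init__(self, n):
        self.tree = [0] * (n + 1)         # 1-indexed

    def add(self, i, delta):
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                    # jump to the next responsible node

    def prefix(self, i):                   # sum of elements 1..i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def range_sum(self, i, j):             # sum of elements i..j
        return self.prefix(j) - self.prefix(i - 1)
```

The `i & -i` trick isolates the lowest set bit, which defines how many elements each tree node is responsible for.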

7. Linked list

7.1. Algorithms

7.1.1. Determine whether a list has cycles

7.1.1.1. Article

7.1.1.2. Idea

7.1.1.2.1. 2 pointers

7.1.1.2.2. One pointer moves 1 element per iteration, the other 2 elements per iteration

7.1.1.2.3. If they meet before the fast pointer reaches the end, the list has a cycle

7.1.1.2.4. O(n) worst case, no additional memory
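
The two-pointer idea above (Floyd's cycle detection) can be sketched in Python (the Node class and names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Slow pointer advances 1 node per step, fast advances 2.

    In a cyclic list the fast pointer laps the slow one, so they must
    meet; in an acyclic list the fast pointer falls off the end.
    O(n) time, O(1) extra memory.
    """
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```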

7.1.2. Reverse linked list

7.1.2.1. Article

7.1.2.2. O(n)
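
An O(n), O(1)-memory iterative reversal can be sketched in Python (the Node class and names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Re-point each node at its predecessor; returns the new head."""
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev
```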

7.2. Usage

7.2.1. For queues

7.2.1.1. Messaging

7.2.1.2. Scheduling

7.2.1.3. In BFS, DFS algorithms

7.2.2. Where number of items is unknown

8. Graphs

8.1. Algorithms

8.1.1. Traversal

8.1.1.1. BFS

8.1.1.1.1. Nearest elements are added to the end of the queue

8.1.1.2. DFS

8.1.1.2.1. Nearest elements are added to the front (i.e., a stack)
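
The queue-vs-stack distinction can be sketched in Python (the function names and the dict-of-lists adjacency format are illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Neighbours go to the back of a queue -> breadth-first order."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(graph, start):
    """Neighbours go onto a stack -> depth-first order."""
    seen, order, stack = set(), [], [start]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        stack.extend(reversed(graph[v]))   # so the first neighbour is popped first
    return order
```

The two traversals differ only in which end of the worklist they take the next vertex from.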

8.1.2. Prim's algorithm - find the minimum spanning tree

8.1.2.1. Select any vertex

8.1.2.2. At each step, add the minimum-weight edge that connects the tree to a new vertex

8.1.3. Does a graph represent a tree?

8.1.3.1. No cycles

8.1.3.2. Connected

8.1.3.3. Article

8.1.4. Longest path in graph

8.1.4.1. Article

8.1.5. Paths from all to all

8.1.5.1. Floyd-Warshall algorithm

8.1.5.1.1. O(n^3)

8.1.5.1.2. Link

8.1.6. Min path

8.1.6.1. Dijkstra algorithm

8.1.7. A*

8.1.7.1. Improvement of Dijkstra algorithm

8.1.7.2. Uses heuristics

8.1.7.3. Node priority is the sum of 2 functions: minDistance + h(x), where h(x) is a heuristic function

8.1.7.3.1. Named 'best-first'

8.2. Directed

8.2.1. weighted

8.2.1.1. Dijkstra's algorithm - find min distances from one vertex to all others

8.2.1.1.1. Link

8.2.1.2. Impl
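
A common heap-based implementation can be sketched in Python (the adjacency-list format and names are assumptions, not taken from the map; edge weights must be non-negative):

```python
import heapq

def dijkstra(graph, source):
    """Min distance from source to every reachable vertex.

    graph maps vertex -> list of (neighbour, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]                       # (distance, vertex)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue                           # stale heap entry, skip
        for w, weight in graph.get(v, []):
            nd = d + weight
            if nd < dist.get(w, float('inf')):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist
```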

8.3. Acyclic

8.3.1. Path between two vertices

8.3.1.1. Article

8.4. Usage

8.4.1. Representing complex structures

8.4.2. Modelling (networks, etc)

9. HashMap

9.1. Implementation

9.1.1. non-threadsafe

9.1.1.1. Article

9.1.1.2. hash function

9.1.1.2.1. h ^= (h >>> 20) ^ (h >>> 12); return h ^ (h >>> 7) ^ (h >>> 4);

9.1.1.3. default initial capacity - MUST be a power of two.

9.1.1.4. Max capacity MUST be a power of two

9.1.1.5. load factor

9.1.1.6. Index of element

9.1.1.6.1. hash & (length - 1)

9.1.2. threadsafe

9.1.2.1. Simple

9.1.2.1.1. Lacks scalability because of locks

9.1.2.2. ConcurrentHashMap

9.1.2.2.1. Segments

9.1.2.2.2. Further improvement with buckets

9.1.2.2.3. concurrencyLevel

9.1.2.3. Article

9.2. Usage

9.2.1. Caches

9.2.2. Storing data with keys

9.2.3. One of the most useful data structures

10. Array/List

10.1. Algorithms

10.1.1. Sorting

10.1.1.1. Bubble sort

10.1.1.1.1. One of the slowest

10.1.1.1.2. O(n^2)

10.1.1.1.3. Repeatedly loop over elements 0..n-1, swapping adjacent out-of-order pairs

10.1.1.2. Merge sort

10.1.1.2.1. O(n log n)

10.1.1.2.2. Article

10.1.1.2.3. Idea

10.1.1.3. Insertion sort

10.1.1.3.1. Worst case: O(n^2)

10.1.1.3.2. Best case: O(n)

10.1.1.3.3. Idea

10.1.1.4. Bucket sort

10.1.1.4.1. Article

10.1.1.5. Quicksort

10.1.1.5.1. Complexity

10.1.1.5.2. Idea

10.1.1.6. Table of complexity

10.1.1.7. Radix sort

10.1.1.7.1. O(mk)

10.1.1.7.2. Sort by the last digit, then by the previous one, and so on

10.1.1.8. Heapsort

10.1.1.8.1. in-place

10.1.1.8.2. Complexity

10.1.1.9. Counting sort

10.1.1.9.1. Linear time, linear space

10.1.1.9.2. Used for data in small ranges

10.1.2. Longest increasing subsequence in an array

10.1.2.1. Article

10.1.2.2. Complexity

10.1.2.2.1. Time: O(n^2)

10.1.2.2.2. Memory: O(n)

10.1.2.3. Idea

10.1.2.3.1. Store the accumulated best length for each element
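
The accumulated-length idea can be sketched in Python (O(n^2) time, O(n) memory; the function name is illustrative):

```python
def longest_increasing_subsequence(a):
    """best[i] = length of the longest increasing subsequence ending at a[i]."""
    if not a:
        return 0
    best = [1] * len(a)
    for i in range(1, len(a)):
        for j in range(i):
            if a[j] < a[i]:                  # a[i] can extend the run at j
                best[i] = max(best[i], best[j] + 1)
    return max(best)
```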

10.1.3. Min and max elements

10.1.3.1. Recursive

10.1.3.1.1. Sort: O(n log n)

10.1.3.1.2. Find: O(n)

10.1.3.2. Non-recursive

10.1.3.2.1. O(n)

10.1.3.3. Article

10.1.4. Find the first pair of duplicates

10.1.4.1. Additional memory

10.1.4.1.1. HashMap with occurrences

10.1.4.1.2. O(n) time, O(n) worst-case memory for the map

10.1.4.2. Without additional memory

10.1.4.2.1. O(n log n)

10.1.5. Find common elements in 2 arrays

10.1.5.1. Article

10.1.5.2. Idea

10.1.5.2.1. Sort both arrays: O(m log m) and O(n log n)

10.1.5.2.2. Walk through both: O(m+n)

10.1.6. Find a number in a sorted array

10.1.6.1. Idea

10.1.6.1.1. Binary search (dichotomy method)

10.1.6.2. Article

10.1.7. Find a median in a distributed collection on thousands of servers

10.1.7.1. Article

10.1.7.2. Idea

10.1.7.2.1. Median of medians

10.1.7.2.2. O(n)

10.1.8. Maximum contiguous subsequence sum problem

10.1.8.1. Article

10.1.8.2. Idea

10.1.8.2.1. Walk from 0 to n-1

10.1.8.2.2. If accumulated sum is negative - reset
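
The reset-on-negative walk (Kadane's algorithm) can be sketched in Python (the function name is illustrative):

```python
def max_subarray_sum(a):
    """Single O(n) pass; a running sum that has gone negative can never
    help a later subarray, so it is dropped (reset) at that point."""
    best = running = a[0]
    for x in a[1:]:
        running = max(x, running + x)   # extend the run, or restart at x
        best = max(best, running)
    return best
```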

10.2. Usage

10.2.1. For storing indexed data

10.2.2. When per-element overhead matters (roughly 8 times less than in other data structures)

11. Probabilistic

11.1. Bloom Filter