Algorithms, Data Structures, Computer Science, Math (created by stevendao)

1. Flow of Control

1.1. Iterative

1.2. Recursion

1.2.1. Traditional Recursion

1.2.1.1. The recursive call, when it happens, comes before other processing in the function

1.2.2. Tail Recursion

1.2.2.1. The processing occurs before the recursive call, so the call itself is the last action in the function. Choosing between the two styles may seem arbitrary, but it can make all the difference: compilers and runtimes that support tail-call optimization can reuse the current stack frame for a tail call instead of growing the stack.
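
A minimal Java sketch contrasting the two styles with factorial (class and method names are hypothetical; note the JVM itself does not perform tail-call optimization, so the stack-frame benefit shows up in languages whose runtimes do):

    public class FactorialDemo {
        // Traditional recursion: the multiply happens AFTER the recursive call
        // returns, so every frame must stay on the stack until the base case.
        static long factorial(long n) {
            if (n <= 1) return 1;
            return n * factorial(n - 1);
        }

        // Tail recursion: the accumulator carries the processing, and the
        // recursive call is the final action in the function.
        static long factorialTail(long n, long acc) {
            if (n <= 1) return acc;
            return factorialTail(n - 1, acc * n);
        }

        public static void main(String[] args) {
            System.out.println(factorial(10));        // 3628800
            System.out.println(factorialTail(10, 1)); // 3628800
        }
    }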

2. Math

2.1. Math Theory

2.1.1. Data Types

2.1.2. Math Properties

2.1.3. Logical Formulas

2.1.4. Set Theory

2.1.4.1. Set Operators

2.1.4.2. Set Theorems

2.2. Numbers

2.2.1. Counting

2.2.1.1. Fencepost Problem

2.2.1.1.1. e.g., a 100-ft fence with a post every 10 ft needs 11 posts (100/10 + 1), not 10: there is always one more post than there are sections

2.2.2. Prime Numbers

2.2.2.1. List of Primes (1-101)

2.2.2.1.1. 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101

2.2.3. GCD of Two Numbers
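
A minimal Java sketch of Euclid's algorithm, the standard way to compute the GCD: it relies on gcd(a, b) = gcd(b, a mod b) and terminates when b reaches 0.

    // Euclid's algorithm: gcd(48, 18) -> gcd(18, 12) -> gcd(12, 6) -> gcd(6, 0) = 6
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = b;
            b = a % b;  // replace (a, b) with (b, a mod b)
            a = t;
        }
        return a;
    }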

2.2.4. Multiplying evens and odds

2.2.5. Sum of 1+2..+n

2.2.5.1. n(n+1)/2
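
For example, 1 + 2 + ... + 100 = 100 · 101 / 2 = 5050.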

2.3. Discrete Math

2.3.1. Permutation

2.3.2. Combination
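
For reference: the number of permutations (order matters) of k items chosen from n is P(n, k) = n! / (n - k)!, and the number of combinations (order does not matter) is C(n, k) = n! / (k! (n - k)!).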

2.4. Modulo Tricks

3. General Knowledge

3.1. ADEPT Method

3.2. Fermi Problems

3.2.1. Solve by Dimensional Analysis

4. CS Concepts

4.1. Big O & Time Complexity

4.1.1. Big O

4.2. Bit Manipulation

4.2.1. Truth Table

4.2.2. Base 2 Table

4.2.3. Binary to Decimal Table

4.2.4. Boolean Properties

4.2.5. Bitwise Operators

4.2.5.1. AND ( & )

4.2.5.2. OR ( | )

4.2.5.3. NOT ( ~ )

4.2.5.4. XOR ( ^ )

4.2.5.5. Left Shift ( << )

4.2.5.5.1. Shift the bit pattern k positions to the left; the vacated low-order bits are filled with 0

4.2.5.5.2. Equivalent to multiplying the bit pattern by 2^k

4.2.5.6. Right Shift ( >> )

4.2.5.6.1. Shift the bit pattern k positions to the right; in Java, >> fills the vacated high-order bits with the sign bit (arithmetic shift) and >>> fills them with 0 (logical shift)

4.2.5.6.2. Equivalent to dividing the bit pattern by 2^k (integer division)
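
A small runnable Java illustration of both equivalences (class name is hypothetical; expected output shown in the comments):

    public class ShiftDemo {
        public static void main(String[] args) {
            int x = 5;                    // binary 0101
            System.out.println(x << 2);   // 20: left shift by k multiplies by 2^k
            System.out.println(x >> 1);   // 2:  right shift by k divides by 2^k
            System.out.println(-8 >> 1);  // -4: arithmetic shift preserves the sign
            System.out.println(-8 >>> 1); // 2147483644: logical shift fills with zeros
        }
    }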

4.2.6. Useful Operations (see the Java sketch after this list)

4.2.6.1. Operations on integers

4.2.6.2. Setting the Kth bit

4.2.6.3. Clearing the Kth bit

4.2.6.4. Toggle the Kth bit

4.2.6.5. Check if the Kth bit set

4.2.6.6. Check if two ints are equal

4.2.6.7. Check if number is divisible by 2

4.2.6.8. Check if a num is power of 2

4.2.6.9. Minimum of two numbers

4.2.6.10. Maximum of two numbers

4.2.6.11. GCD of two numbers

4.2.6.12. Return the right-most 1 in binary set

4.2.6.13. Swap two integers
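
A minimal Java sketch of several of the operations above (bits indexed from 0; these are the standard tricks, written out here rather than taken from the original map):

    static int setBit(int n, int k)       { return n | (1 << k); }   // set the kth bit
    static int clearBit(int n, int k)     { return n & ~(1 << k); }  // clear the kth bit
    static int toggleBit(int n, int k)    { return n ^ (1 << k); }   // flip the kth bit
    static boolean isBitSet(int n, int k) { return (n & (1 << k)) != 0; }
    static boolean isDivisibleBy2(int n)  { return (n & 1) == 0; }   // lowest bit 0 => even
    static boolean isPowerOfTwo(int n)    { return n > 0 && (n & (n - 1)) == 0; }
    static int rightmostOne(int n)        { return n & -n; }         // isolates lowest set bit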

4.2.7. XOR Properties

4.2.7.1. Without using '^'

4.2.7.2. Swapping x and y without a temp
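
A sketch of the XOR swap; it works because x ^ x = 0 and XOR is associative and commutative, but it breaks if both operands alias the same variable:

    public class XorSwap {
        public static void main(String[] args) {
            int x = 3, y = 5;
            x = x ^ y; // x now holds x0 ^ y0
            y = x ^ y; // y = (x0 ^ y0) ^ y0 = x0
            x = x ^ y; // x = (x0 ^ y0) ^ x0 = y0
            System.out.println(x + " " + y); // 5 3
        }
    }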

4.3. Dynamic Programming

4.3.1. DP Explanation

4.3.1.1. Based on a recurrence formula and one or more starting states

4.3.1.2. Each sub-solution is constructed from previously computed ones

4.3.1.3. Solutions typically run in polynomial time, faster than backtracking or brute force

4.3.2. Solving a DP problem

4.3.2.1. 1. Split the problem into overlapping sub-problems.

4.3.2.2. 2. Solve each sub-problem recursively.

4.3.2.3. 3. Combine the solutions to sub-problems into a solution for the given problem.

4.3.2.4. 4. Don’t compute the answer to the same problem more than once.
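
A minimal Java sketch of these four steps using Fibonacci as the recurrence, fib(n) = fib(n-1) + fib(n-2) with starting states fib(0) = 0 and fib(1) = 1 (class name is hypothetical); the memo table is what keeps any sub-problem from being computed twice:

    import java.util.HashMap;
    import java.util.Map;

    public class Fib {
        private static final Map<Integer, Long> memo = new HashMap<>();

        static long fib(int n) {
            if (n <= 1) return n;                  // starting states
            Long cached = memo.get(n);
            if (cached != null) return cached;     // step 4: never recompute
            long result = fib(n - 1) + fib(n - 2); // steps 2-3: solve and combine
            memo.put(n, result);
            return result;
        }

        public static void main(String[] args) {
            System.out.println(fib(50)); // 12586269025, instantly instead of exponentially
        }
    }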

4.4. Greedy Algorithm

4.5. Backtracking

4.5.1. General Algorithm

4.6. System Design Concepts

4.6.1. Concurrency

4.6.1.1. Concurrency

4.6.1.1.1. Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant, e.g. multitasking on a single-core machine.

4.6.1.2. Parallelism

4.6.1.2.1. Parallelism is when tasks literally run at the same time, eg. on a multicore processor.

4.6.1.3. Race Condition

4.6.1.3.1. A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.
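
A minimal Java demonstration (hypothetical class name): counter++ is a read-modify-write, so two threads racing on it lose updates, and the final value usually comes out well below 200000:

    public class RaceDemo {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) counter++; // not atomic
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(counter); // typically < 200000: lost updates
        }
    }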

4.6.1.4. Locking

4.6.1.4.1. Lock

4.6.1.4.2. Deadlock

4.6.1.4.3. Pessimistic

4.6.1.4.4. Optimistic

4.6.1.4.5. Optimistic vs. Pessimistic

4.6.1.5. Semaphore

4.6.1.5.1. A semaphore is a way to lock a resource so that it is guaranteed that while a piece of code is executed, only this piece of code has access to that resource. This keeps two threads from concurrently accessing a resource, which can cause problems.
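
A minimal sketch using java.util.concurrent.Semaphore with a single permit, so only one thread can hold the resource at a time (class and method names are hypothetical):

    import java.util.concurrent.Semaphore;

    public class SemaphoreDemo {
        static final Semaphore permit = new Semaphore(1); // one permit = exclusive access

        static void useResource(String name) throws InterruptedException {
            permit.acquire();        // blocks until the permit is free
            try {
                System.out.println(name + " has the resource");
            } finally {
                permit.release();    // always hand the permit back
            }
        }

        public static void main(String[] args) throws InterruptedException {
            useResource("main");
        }
    }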

4.6.1.6. Mutual Exclusion

4.6.1.6.1. Mutual exclusion means that only a single thread should be able to access the shared resource at any given point in time. This avoids race conditions between threads acquiring the resource. Monitors and locks provide the functionality to do so.
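
A minimal Java sketch using a synchronized block (Java's built-in monitor) to enforce mutual exclusion on a counter (hypothetical class name):

    public class SafeCounter {
        private int count = 0;
        private final Object lock = new Object();

        public void increment() {
            synchronized (lock) { // only one thread may hold the monitor at a time
                count++;          // the read-modify-write can no longer interleave
            }
        }

        public int get() {
            synchronized (lock) {
                return count;
            }
        }
    }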

4.6.1.7. Mutex vs. Semaphore

4.6.1.8. Commits

4.6.1.8.1. Two-Phase

4.6.1.8.2. Three-Phase

4.6.1.8.3. One-Phase

4.6.2. Design Patterns

4.6.2.1. Creational

4.6.2.2. Structural

4.6.2.3. Behavioral

4.7. P vs NP Problem

4.7.1. ELI5 Explanation

4.7.2. P Problem

4.7.3. NP Problem

4.8. Deterministic vs Non-Deterministic

4.8.1. Deterministic

4.8.1.1. Same input will always produce same output, passing through same sequence of states

4.8.2. Nondeterministic

4.8.2.1. Same input can exhibit different behavior each run

5. Algorithm Theory

5.1. Classifying Algorithms

5.1.1. By Problem Domain

5.1.1.1. Numeric

5.1.1.1.1. Super large numbers, floating-point numbers with errors,...

5.1.1.2. String

5.1.1.2.1. Pattern matching, compression, sequencing, cryptography,...

5.1.1.3. Sorting

5.1.1.3.1. Putting things in order

5.1.1.4. Searching

5.1.1.4.1. Finding an element by search key

5.1.1.5. Combinatoric

5.1.1.5.1. Permutations, combinations, subsets, ...

5.1.1.6. Partitioning

5.1.1.6.1. Fair division,...

5.1.1.7. Network

5.1.1.7.1. Routing, spanning trees, connectivity, flow, ...

5.1.2. By Design Strategy

5.1.2.1. Brute Force

5.1.2.1.1. Enumerate all possible solutions, unintelligently, and try them all.

5.1.2.2. Pre-Structuring

5.1.2.2.1. Pre-arrange data for faster processing.

5.1.2.3. Divide and Conquer

5.1.2.3.1. Break down a problem into multiple independent subproblems, solve the subproblems (recursively), and combine those solutions into a solution for the original problem.

5.1.2.4. Decrease and Conquer

5.1.2.4.1. Like Divide and Conquer, but the problem is reduced to a single smaller subproblem (binary search is the classic example).

5.1.2.5. Transform and Conquer

5.1.2.5.1. Reformulate problem into an equivalent problem in another domain.

5.1.2.6. Input Enhancement

5.1.2.6.1. Pre-calculate results for certain inputs, or create a cache of partial results, so that the actual run is faster.

5.1.2.7. Dynamic Programming

5.1.2.7.1. Solve an optimization problem by breaking it down into multiple overlapping subproblems, solving the subproblems (recursively), and combining those solutions into a solution for the original problem.

5.1.2.8. Greedy Algorithm

5.1.2.8.1. When searching for a solution, take the most promising next step and never backtrack.

5.1.2.9. Backtracking

5.1.2.9.1. Systematically generate partial candidate solutions, backing up whenever the current partial candidate cannot possibly be extended to a complete solution (see the Java sketch after this node).

5.1.2.9.2. Branch and Bound
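
A minimal Java sketch of the backtracking pattern described above, generating all permutations of an array (a hypothetical example, not from the original map):

    import java.util.ArrayList;
    import java.util.List;

    public class Permutations {
        static void permute(int[] nums, List<Integer> partial, boolean[] used,
                            List<List<Integer>> out) {
            if (partial.size() == nums.length) {
                out.add(new ArrayList<>(partial));  // complete candidate: record it
                return;
            }
            for (int i = 0; i < nums.length; i++) {
                if (used[i]) continue;              // cannot extend with a reused element
                used[i] = true;
                partial.add(nums[i]);               // extend the partial candidate
                permute(nums, partial, used, out);
                partial.remove(partial.size() - 1); // back up (the "backtrack")
                used[i] = false;
            }
        }

        public static void main(String[] args) {
            List<List<Integer>> out = new ArrayList<>();
            permute(new int[]{1, 2, 3}, new ArrayList<>(), new boolean[3], out);
            System.out.println(out); // [[1, 2, 3], [1, 3, 2], [2, 1, 3], ...]
        }
    }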

5.1.2.10. Other Strategies

5.1.2.10.1. Hill Climbing

5.1.2.10.2. Particle Swarm

5.1.2.10.3. Las Vegas

5.1.2.10.4. Monte Carlo

5.1.3. By Complexity

5.1.3.1. Constant O(1)

5.1.3.2. Logarithmic O(log n)

5.1.3.3. Linear O(n)

5.1.3.4. Linearithmic O(n log n)

5.1.3.5. Polynomial

5.1.3.5.1. Quadratic O(n^2)

5.1.3.5.2. Cubic O(n^3)

5.1.3.6. Exponential O(k^n)

5.1.4. By Data Structure

5.1.4.1. Organizational

5.1.4.1.1. Set

5.1.4.1.2. Linear

5.1.4.1.3. Hierarchical

5.1.4.1.4. Network

5.1.4.2. Implementational

5.1.4.2.1. Hash Tables

5.1.4.2.2. Trees

5.1.4.2.3. Skip Lists

6. Data Structures

6.1. Choosing the Right Data Structure

6.2. API Methods to Know

7. Algorithms

7.1. General

7.1.1. Boyer-Moore Algorithm

7.2. Sorting

7.2.1. Concept

7.2.1.1. Stable vs Unstable

7.2.1.1.1. A sort is stable if two objects with equal keys appear in the same order in the sorted output as they appear in the unsorted input array

7.2.1.2. Quicksort vs Mergesort

7.2.1.2.1. Use Mergesort if guaranteed O(n log n) runtime is needed. Use Quicksort otherwise.

7.2.1.2.2. Both run in O(n log n) time on average

7.2.1.2.3. Quicksort faster on average

7.2.1.2.4. Quicksort more efficient with space

7.2.1.2.5. Quicksort is in-place

7.2.1.2.6. Mergesort uses O(n) auxiliary space

7.2.1.2.7. Quicksort's worst-case runtime is O(n^2)

7.2.1.2.8. Mergesort has a guaranteed O(n log n) runtime

7.2.2. Sorting Algorithm Comparison Chart

7.2.3. Sorting Algorithms

7.2.3.1. Bubble Sort

7.2.3.1.1. Complexity

7.2.3.1.2. Stable

7.2.3.2. Selection Sort

7.2.3.2.1. Complexity

7.2.3.2.2. Unstable

7.2.3.3. Insertion Sort

7.2.3.3.1. Left side always sorted

7.2.3.3.2. Algorithm

7.2.3.3.3. Complexity

7.2.3.3.4. Stable Sort

7.2.3.3.5. Online Algorithm
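
A minimal Java sketch matching the description above: everything left of index i stays sorted, and each pass inserts a[i] into its place.

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) { // shift larger elements one slot right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                // drop the key into the gap
        }
    }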

7.2.3.4. Mergesort

7.2.3.4.1. Complexity

7.2.3.4.2. Stable

7.2.3.4.3. Linked List Implementation

7.2.3.5. Quicksort

7.2.3.5.1. Complexity

7.2.3.5.2. Unstable

7.2.3.6. Heapsort

7.2.3.6.1. Complexity

7.2.3.6.2. Unstable

7.3. Searching

7.3.1. Searching Algorithms

7.3.1.1. Binary Search

7.3.1.1.1. O(log n) time
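
A minimal Java sketch; it requires a sorted array and halves the search range each step, hence O(log n):

    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;      // avoids overflow of (lo + hi)
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1; // discard the left half
            else hi = mid - 1;                 // discard the right half
        }
        return -1; // not found
    }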

7.3.1.2. Breadth-First Search

7.3.1.3. Depth-First Search

7.3.1.4. Dijkstra's Algorithm

7.3.1.5. A*

8. Java

8.1. Concepts

8.1.1. JVM

8.1.1.1. Heap vs. Stack

8.1.2. Comparable vs. Comparator

8.1.3. hashCode() and equals() Contract

8.1.4. Simple Java Review

8.2. Java Collections

8.2.1. Collection

8.2.1.1. extends Iterable

8.2.1.2. Collection API

8.2.1.3. Subclasses

8.2.1.3.1. List API

8.2.1.3.2. Set API

8.2.1.3.3. Queue API

8.2.2. Map

8.2.2.1. Map API

8.2.2.1.1. HashMap

8.2.2.1.2. TreeMap

8.2.2.1.3. LinkedHashMap

8.2.2.1.4. IdentityHashMap

8.2.2.1.5. WeakHashMap

8.2.2.1.6. Sorted Map API

8.2.2.1.7. Hashtable

8.3. Data Structure Comparisons

8.3.1. ArrayList vs. LinkedList

8.3.1.1. ArrayList

8.3.1.1.1. Random read access in constant time

8.3.1.1.2. Adding or removing requires shifting of all latter elements

8.3.1.1.3. If you add more elements than the capacity of the underlying array, a new array (1.5 times the size) is allocated and the old array is copied over, so adding to an ArrayList is O(n) in the worst case but amortized constant.

8.3.1.1.4. Since the underlying implementation is an array, the array must be resized if you add a lot of elements. To avoid the high cost of resizing when you know you're going to add a lot of elements, construct the ArrayList with a higher initial capacity.

8.3.1.2. LinkedList

8.3.1.2.1. Allows for constant-time insertions or removals using iterators

8.3.1.2.2. Only sequential access of elements

8.3.1.2.3. Each element of a LinkedList has more overhead since pointers to the next and previous elements are also stored. ArrayLists don't have this overhead.

8.3.1.2.4. The main benefits of a LinkedList arise when you reuse existing iterators to insert and remove elements; those operations are O(1) because only the local links change (see the sketch below).
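
A minimal Java sketch of both points: presizing an ArrayList, and O(1) insertion through a LinkedList iterator (hypothetical class name; List.of requires Java 9+):

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;

    public class ListDemo {
        public static void main(String[] args) {
            // Known size up front: presize to avoid repeated resize-and-copy.
            List<Integer> array = new ArrayList<>(1_000_000);
            for (int i = 0; i < 1_000_000; i++) array.add(i);

            // Insertion through an existing iterator touches only local links.
            LinkedList<String> linked = new LinkedList<>(List.of("a", "c"));
            ListIterator<String> it = linked.listIterator();
            it.next();   // step past "a"
            it.add("b"); // O(1): no shifting of later elements
            System.out.println(linked); // [a, b, c]
        }
    }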

8.3.2. HashSet vs. TreeSet