Algorithms, Data Structures, Computer Science, Math Created by stevendao


1. Flow of Control

1.1. Iterative

1.2. Recursion

1.2.1. Traditional Recursion The recursive call, when it happens, comes before other processing in the function

1.2.2. Tail Recursion The processing occurs before the recursive call, which is the last action in the function. The choice between the two styles can make all the difference: because nothing remains to be done after a tail call, many languages reuse the current stack frame (tail-call optimization), so a tail-recursive function can run in constant stack space.
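The difference is easiest to see side by side. A minimal Java sketch (class and method names are illustrative, not from the source):

```java
public class FactorialDemo {
    // Traditional recursion: the multiplication happens AFTER the recursive
    // call returns, so every frame must stay on the stack until the end.
    static long factorial(long n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    // Tail recursion: all work is done before the call; the recursive call
    // is the very last action, carrying the partial result in an accumulator.
    static long factorialTail(long n, long acc) {
        if (n <= 1) return acc;
        return factorialTail(n - 1, n * acc);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5));        // 120
        System.out.println(factorialTail(5, 1)); // 120
    }
}
```

The JVM itself does not eliminate tail calls, but languages such as Scala and Scheme compile the tail-recursive form into a loop, which is why the choice of style matters.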

2. Math

2.1. Math Theory

2.1.1. Data Types

2.1.2. Math Properties

2.1.3. Logical Formulas

2.1.4. Set Theory Set Operators Set Theorems

2.2. Numbers

2.2.1. Counting Fencepost Problem A 100-ft fence with a post every 10 ft needs 11 posts, not 10: a fence of n sections has n + 1 posts.
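The counting rule can be sketched in Java (a hypothetical helper, assuming the fence length divides evenly by the spacing):

```java
public class Fencepost {
    // One more post than the number of sections, because both ends need a post.
    static int posts(int lengthFt, int spacingFt) {
        return lengthFt / spacingFt + 1;
    }

    public static void main(String[] args) {
        System.out.println(posts(100, 10)); // 11, not 10
    }
}
```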

2.2.2. Prime Numbers List of Primes (1-101) 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101

2.2.3. GCD of Two Numbers
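Euclid's algorithm is the standard way to compute the GCD; a minimal Java sketch:

```java
public class Gcd {
    // Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), repeated until the
    // remainder is 0; the last nonzero value is the GCD.
    static int gcd(int a, int b) {
        while (b != 0) {
            int r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(48, 18)); // 6
    }
}
```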

2.2.4. Multiplying evens and odds

2.2.5. Sum of 1+2..+n n(n+1)/2

2.3. Discrete Math

2.3.1. Permutation

2.3.2. Combination
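Both counts have closed forms: P(n, r) = n!/(n-r)! for ordered selections, and C(n, r) = n!/(r!(n-r)!) for unordered ones. A small Java sketch that avoids computing full factorials (method names are illustrative):

```java
public class Counting {
    // P(n, r): ordered selections of r items from n. Multiply the r largest
    // factors of n! directly: n * (n-1) * ... * (n-r+1).
    static long permutations(int n, int r) {
        long result = 1;
        for (int i = 0; i < r; i++) result *= (n - i);
        return result;
    }

    // C(n, r): unordered selections; divide out the r! orderings as we go.
    // After step i the running value equals C(n-r+i, i), always an integer.
    static long combinations(int n, int r) {
        long result = 1;
        for (int i = 1; i <= r; i++) result = result * (n - r + i) / i;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(permutations(5, 2)); // 20
        System.out.println(combinations(5, 2)); // 10
    }
}
```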

2.4. Modulo Tricks

3. General Knowledge

3.1. ADEPT Method

3.2. Fermi Problems

3.2.1. Solve by Dimensional Analysis

4. CS Concepts

4.1. Big O & Time Complexity

4.1.1. Big O

4.2. Bit Manipulation

4.2.1. Truth Table

4.2.2. Base 2 Table

4.2.3. Binary to Decimal Table

4.2.4. Boolean Properties

4.2.5. Bitwise Operators AND ( & ) OR ( | ) NOT ( ~ ) XOR ( ^ ) Left Shift ( << ) Shifts the bit pattern left by k bits, filling with 0s on the right; equivalent to multiplying by 2^k Right Shift ( >> ) Shifts the bit pattern right by k bits, dropping the low bits; equivalent to dividing by 2^k (in Java, >> copies the sign bit in on the left, while >>> fills with 0s)
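A quick Java sketch of the operators on small values (the operands are illustrative):

```java
public class Shifts {
    public static void main(String[] args) {
        int x = 5;                   // binary 101
        System.out.println(x << 2);  // 20: shifting left k bits multiplies by 2^k
        System.out.println(20 >> 2); // 5: shifting right k bits divides by 2^k
        System.out.println(5 & 3);   // 1   (101 & 011 = 001)
        System.out.println(5 | 3);   // 7   (101 | 011 = 111)
        System.out.println(5 ^ 3);   // 6   (101 ^ 011 = 110)
        System.out.println(~5);      // -6  (bitwise NOT in two's complement)
    }
}
```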

4.2.6. Useful Operations Operations on integers: set the kth bit; clear the kth bit; toggle the kth bit; check if the kth bit is set; check if two ints are equal; check if a number is divisible by 2; check if a number is a power of 2; minimum of two numbers; maximum of two numbers; GCD of two numbers; return the right-most set bit; swap two integers
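Several of these operations sketched in Java (one common way to write them, not the only one):

```java
public class BitOps {
    static int setBit(int x, int k)       { return x | (1 << k); }
    static int clearBit(int x, int k)     { return x & ~(1 << k); }
    static int toggleBit(int x, int k)    { return x ^ (1 << k); }
    static boolean isBitSet(int x, int k) { return (x & (1 << k)) != 0; }
    static boolean isEven(int x)          { return (x & 1) == 0; }
    // A power of 2 has exactly one set bit, so x & (x - 1) clears it to 0.
    static boolean isPowerOfTwo(int x)    { return x > 0 && (x & (x - 1)) == 0; }
    // In two's complement, x & -x isolates the lowest set bit.
    static int rightmostOne(int x)        { return x & -x; }

    public static void main(String[] args) {
        System.out.println(setBit(0b1000, 1));   // 10 (0b1010)
        System.out.println(isPowerOfTwo(16));    // true
        System.out.println(rightmostOne(12));    // 4  (0b1100 -> 0b0100)
    }
}
```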

4.2.7. XOR Properties Without using '^' Swapping x and y without a temp
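The XOR swap sketched in Java:

```java
public class XorSwap {
    public static void main(String[] args) {
        int x = 3, y = 7;
        // Each XOR folds both values together; three XORs untangle the pair
        // without a temporary variable.
        x = x ^ y;
        y = x ^ y; // y now holds the original x
        x = x ^ y; // x now holds the original y
        System.out.println(x + " " + y); // 7 3
    }
}
```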

4.3. Dynamic Programming

4.3.1. DP Explanation Based on a recurrence relation and one (or more) starting states. A sub-solution of the problem is constructed from previously computed ones. Solutions typically have polynomial complexity, faster than backtracking or brute force.

4.3.2. Solving a DP problem 1. Split the problem into overlapping sub-problems. 2. Solve each sub-problem recursively. 3. Combine the solutions to sub-problems into a solution for the given problem. 4. Don’t compute the answer to the same problem more than once.
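The classic illustration of step 4 is memoized Fibonacci; a Java sketch (assuming a top-down approach with a HashMap cache):

```java
import java.util.HashMap;
import java.util.Map;

public class FibDp {
    static final Map<Integer, Long> memo = new HashMap<>();

    // Step 4 above: cache each sub-solution so it is computed only once,
    // turning an exponential recursion into O(n).
    static long fib(int n) {
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n);
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```

Without the memo, fib(50) would make billions of redundant calls; with it, each value from 2 to 50 is computed exactly once.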

4.4. Greedy Algorithm

4.5. Backtracking

4.5.1. General Algorithm

4.6. System Design Concepts

4.6.1. Concurrency Concurrency Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant, e.g. multitasking on a single-core machine. Parallelism Parallelism is when tasks literally run at the same time, e.g. on a multicore processor. Race Condition A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data. Locking Lock Deadlock Pessimistic Optimistic Optimistic vs. Pessimistic Semaphore A semaphore is a way to lock a resource so that it is guaranteed that while a piece of code is executed, only this piece of code has access to that resource. This keeps two threads from concurrently accessing a resource, which can cause problems. Mutual Exclusion Mutual exclusion means that only a single thread should be able to access the shared resource at any given point in time. This avoids race conditions between threads acquiring the resource. Monitors and Locks provide the functionality to do so. Mutex vs. Semaphore Commits Two-Phase Three-Phase One-Phase
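A minimal Java sketch of a race condition and its fix via mutual exclusion (the counters and iteration count are illustrative):

```java
public class RaceDemo {
    static int unsafeCount = 0;
    static int safeCount = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;        // read-modify-write: NOT atomic, races
                synchronized (lock) { // mutual exclusion: one thread at a time
                    safeCount++;
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unsafe: " + unsafeCount); // often less than 200000
        System.out.println("safe:   " + safeCount);   // always 200000
    }
}
```

The unsynchronized counter can lose updates because `unsafeCount++` is three steps (read, add, write) that the scheduler may interleave; the `synchronized` block makes the same increment atomic.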

4.6.2. Design Patterns Creational Structural Behavioral

4.7. P vs NP Problem

4.7.1. ELI5 Explanation

4.7.2. P Problem

4.7.3. NP Problem

4.8. Deterministic vs Non-Deterministic

4.8.1. Deterministic Same input will always produce same output, passing through same sequence of states

4.8.2. Nondeterministic Same input can exhibit different behavior each run

5. Algorithm Theory

5.1. Classifying Algorithms

5.1.1. By Problem Domain Numeric Super large numbers, floating-point numbers with errors,... String Pattern matching, compression, sequencing, cryptography,... Sorting Putting things in order Searching Finding an element by search key Combinatoric Permutations, combinations, subsets, ... Partitioning Fair division,... Network Routing, spanning trees, connectivity, flow, ...

5.1.2. By Design Strategy Brute Force Enumerate all possible solutions, unintelligently, and try them all. Pre-Structuring Pre-arrange data for faster processing. Divide and Conquer Break down a problem into multiple independent subproblems, solve the subproblems (recursively), and combine those solutions into a solution for the original problem. Decrease and Conquer Like Divide and Conquer but break down into one subproblem. Transform and Conquer Reformulate problem into an equivalent problem in another domain. Input Enhancement Pre-calculate results for certain inputs, or create a cache of partial results, so that the actual run is faster. Dynamic Programming Solve an optimization problem by breaking it down into multiple overlapping subproblems, solving the subproblems (recursively), and combining those solutions into a solution for the original problem. Greedy Algorithm When searching for a solution, take the most promising next step and never backtrack. Backtracking Systematically generate possible solutions to a problem, sometimes having to back up when realizing your partially generated candidate can't possibly be extended to a real solution. Branch and Bound Other Strategies Hill Climbing Particle Swarm Las Vegas Monte Carlo

5.1.3. By Complexity Constant O(1) Logarithmic O(log n) Linear O(n) Linearithmic O(n log n) Polynomial Quadratic O(n^2) Cubic O(n^3) Exponential O(k^n)

5.1.4. By Data Structure Organizational Set Linear Hierarchical Network Implementational Hash Tables Trees Skip Lists

6. Data Structures

6.1. Choosing the Right Data Structure

6.2. API Methods to Know

7. Algorithms

7.1. General

7.1.1. Boyer-Moore Algorithm

7.2. Sorting

7.2.1. Concept Stable vs Unstable Stable if two objects with equal keys appear in the same order in the sorted output as they appear in the unsorted input array Quicksort vs Mergesort Use Mergesort if guaranteed O(n log n) runtime is needed. Use Quicksort otherwise. Both O(n log n) average time Quicksort faster on average Quicksort more efficient with space Quicksort is in-place Mergesort uses O(n) auxiliary space Quicksort worst-case runtime is O(n²) Mergesort is guaranteed O(n log n) runtime

7.2.2. Sorting Algorithm Comparison Chart

7.2.3. Sorting Algorithms Bubble Sort Complexity Stable Selection Sort Complexity Unstable Insertion Sort Left side always sorted Algorithm Complexity Stable Sort Online Algorithm Mergesort Complexity Stable Linked List Implementation Quicksort Complexity Unstable Heapsort Complexity Unstable
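Insertion sort shows the "left side always sorted" invariant directly; a Java sketch:

```java
import java.util.Arrays;

public class InsertionSort {
    // Invariant: the prefix a[0..i-1] is always sorted; each pass inserts
    // a[i] into its place within that prefix.
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift larger elements one slot right
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}
```

Because elements only shift past strictly greater keys, equal keys keep their input order, which is why insertion sort is a stable sort. It is also an online algorithm: it can sort elements as they arrive.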

7.3. Searching

7.3.1. Searching Algorithms Binary Search O(log n) time Breadth-First Search Depth-First Search Dijkstra's Algorithm A*
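A Java sketch of binary search (iterative form; assumes the array is already sorted):

```java
public class BinarySearch {
    // Halves the search range each step, giving O(log n) time.
    static int indexOf(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi) / 2
            if (a[mid] == target) return mid;
            else if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(indexOf(sorted, 7)); // 3
        System.out.println(indexOf(sorted, 4)); // -1
    }
}
```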

8. Java

8.1. Concepts

8.1.1. JVM Heap vs. Stack

8.1.2. Comparable vs. Comparator
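A Java sketch of the distinction (the Person class is a made-up example): Comparable defines the one natural order inside the class, while a Comparator supplies an alternative order from outside it.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    static class Person implements Comparable<Person> {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }

        // Comparable: the class's own natural order (here, by age).
        @Override public int compareTo(Person other) {
            return Integer.compare(this.age, other.age);
        }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Bob", 25));
        people.add(new Person("Alice", 30));

        Collections.sort(people);               // uses compareTo: by age
        System.out.println(people.get(0).name); // Bob (youngest)

        // Comparator: a separate ordering, without touching Person.
        people.sort(Comparator.comparing(p -> p.name));
        System.out.println(people.get(0).name); // Alice (first alphabetically)
    }
}
```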

8.1.3. hashCode() and equals() Contract
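A Java sketch of the contract (Point is a made-up example): whenever equals() says two objects are equal, hashCode() must return the same value for both, or hash-based collections like HashMap and HashSet silently misbehave.

```java
import java.util.Objects;

public class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Overridden together with equals(): equal Points hash identically.
    @Override public int hashCode() {
        return Objects.hash(x, y);
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2), b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true
    }
}
```

Overriding equals() without hashCode() is the classic bug: two "equal" keys land in different hash buckets, so a HashMap lookup with an equal key fails.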

8.1.4. Simple Java Review

8.2. Java Collections

8.2.1. Collection extends Iterable Collection API Subclasses List API Set API Queue API

8.2.2. Map Map API HashMap TreeMap LinkedHashMap IdentityHashMap WeakHashMap SortedMap API Hashtable (legacy)

8.3. Data Structure Comparisons

8.3.1. ArrayList vs. LinkedList ArrayList Random read access in constant time Adding or removing requires shifting all subsequent elements If you add more elements than the capacity of the underlying array, a new array (1.5 times the size) is allocated and the old array is copied to it, so adding to an ArrayList is O(n) in the worst case but constant on average. Since the underlying implementation is an array, the array must be resized if you add a lot of elements. To avoid the high cost of resizing when you know you're going to add a lot of elements, construct the ArrayList with a higher initial capacity. LinkedList Allows for constant-time insertions or removals using iterators Only sequential access of elements Each element of a LinkedList has more overhead since pointers to the next and previous elements are also stored. ArrayLists don't have this overhead. The main benefits of using a LinkedList arise when you re-use existing iterators to insert and remove elements. These operations can then be done in O(1) by changing the list locally only.
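A Java sketch of both points: presizing an ArrayList to avoid grow-and-copy, and splicing into a LinkedList through its iterator:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class ListDemo {
    public static void main(String[] args) {
        // Sizing up front avoids repeated grow-and-copy while adding.
        List<Integer> array = new ArrayList<>(1_000);
        for (int i = 0; i < 1_000; i++) array.add(i);
        System.out.println(array.get(500)); // random access: O(1)

        // A reused iterator lets LinkedList insert in O(1) at the cursor,
        // changing the list locally with no shifting of later elements.
        LinkedList<String> linked = new LinkedList<>(List.of("a", "c"));
        ListIterator<String> it = linked.listIterator();
        it.next();   // advance the cursor past "a"
        it.add("b"); // splice in at the cursor position
        System.out.println(linked); // [a, b, c]
    }
}
```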

8.3.2. HashSet vs. TreeSet