## 1. INTRO TO ANALYSIS OF ALGORITHMS

### 1.1. WHAT IS ANALYSIS OF ALGORITHMS?

1.1.1. Algorithms can be analysed with respect to three aspects: CORRECTNESS, EASE OF UNDERSTANDING, and COMPUTATIONAL RESOURCE CONSUMPTION

1.1.1.1. CORRECTNESS

1.1.1.1.1. Performing a specific task according to a given specification

1.1.1.2. EASE OF UNDERSTANDING

1.1.1.2.1. Functions which implement specific algorithms are usually part of larger programs which are maintained and updated by different people at various times

1.1.1.2.2. Future development teams should be able to understand the meaning behind the algorithms implemented and what the author intended them to accomplish

1.1.1.3. COMPUTATIONAL RESOURCE CONSUMPTION

1.1.1.3.1. Algorithms use processing and memory resources when executed (and sometimes bandwidth)

1.1.1.3.2. The fewer resources an algorithm consumes, the better it rates in this respect

1.1.1.3.3. Important Questions for Analysing Algorithms

### 1.2. MEASURING/ESTIMATION OF TIME AND SPACE REQUIREMENTS

1.2.1. 2 APPROACHES

1.2.1.1. Empirical Approach: implementing the algorithm in a particular programming language, executing it on a specific machine, and measuring its execution time and memory requirements.

1.2.1.1.1. Pros: - Real / Exact results - No need for calculations

1.2.1.1.2. Cons: - Machine-Dependent Results - Implementation Effort
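A minimal sketch of the empirical approach in Python, using a hypothetical `sum_list` function as the algorithm under test. The printed timings depend entirely on the machine and interpreter used, which is exactly the machine-dependence listed under Cons:

```python
import time

def sum_list(data):
    """A stand-in algorithm to benchmark: sums N numbers."""
    total = 0
    for x in data:
        total += x
    return total

# Empirical measurement: time the same algorithm on growing inputs.
# The numbers printed are machine- and interpreter-dependent.
for n in (10_000, 100_000, 1_000_000):
    data = list(range(n))
    start = time.perf_counter()
    sum_list(data)
    elapsed = time.perf_counter() - start
    print(f"N = {n:>9,}: {elapsed:.6f} s")
```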

1.2.1.2. Theoretical Approach: making reasonable assumptions about the number of CPU operations the algorithm will perform, counting the variables it creates, and calculating its memory space from that

1.2.1.2.1. Pros: - Universal Results - No Implementation Effort

1.2.1.2.2. Cons: - Approximate results - Calculations Effort

1.2.1.2.3. Working with this approach: 1. Learn the machine model 2. Know its assumptions 3. Do the calculations

### 1.3. RAM MODEL

1.3.1. Random Access Machine model

1.3.1.1. Used to analyze the running time and memory requirements of the algorithms

1.3.1.2. ASSUMPTIONS

1.3.1.2.1. RUNNING TIME

1.3.1.2.2. MEMORY SPACE

### 1.4. COUNTING UP OF TIME AND SPACE UNITS

1.4.1. using pseudocode of the algorithms

1.4.1.1. SIMPLE ALGORITHM (w/o loops)

1.4.1.2. COMPLEX ALGORITHM (with for loops)
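As a sketch of the counting exercise (Python standing in for pseudocode), the hypothetical function below is instrumented to count time units under an assumed RAM-model convention in which each assignment, array access, and addition costs one unit:

```python
def sum_array(A):
    """Sums N numbers while counting RAM-model time units.
    Assumed convention: each assignment, array access, and
    addition costs exactly one time unit."""
    ops = 0
    total = 0           # 1 assignment
    ops += 1
    for x in A:         # per iteration: 1 access + 1 addition + 1 assignment
        total = total + x
        ops += 3
    return total, ops

# For an input of size N the count is T(N) = 3N + 1.
total, t = sum_array([5, 7, 9])
print(total, t)  # -> 21 10
```

A loop multiplies the cost of its body by the number of iterations, which is why the "complex" case with for loops requires more bookkeeping than the straight-line case.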

## 2. GROWTH OF FUNCTIONS

### 2.1. Counting up every single time unit takes too much time and effort if we only want to compare different algorithms and select the fastest one

2.1.1. Applies to both running times and memory requirements

2.1.1.1. Growth functions aid with

2.1.1.1.1. Comparing Algorithms efficiently

2.1.1.1.2. easing the manual calculation of running time, which is tedious and error-prone

2.1.1.2. We focus on the big values of N

2.1.1.3. Coefficients and lower-order terms are NOT important when defining growth

2.1.1.3.1. Typical Growth Functions, from slowest-growing (fastest algorithm) to fastest-growing (slowest algorithm): 1, logN, N, NlogN, N^2, N^3, 2^N, N!
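A small sketch evaluating these typical growth functions at one input size makes the gap between them concrete (even at a modest N = 30 the values span many orders of magnitude):

```python
import math

# Typical growth functions, listed from slowest-growing (fastest
# algorithm) to fastest-growing (slowest algorithm).
growth = {
    "1":     lambda n: 1,
    "logN":  lambda n: math.log2(n),
    "N":     lambda n: n,
    "NlogN": lambda n: n * math.log2(n),
    "N^2":   lambda n: n ** 2,
    "N^3":   lambda n: n ** 3,
    "2^N":   lambda n: 2 ** n,
    "N!":    lambda n: math.factorial(n),
}

for name, f in growth.items():
    print(f"{name:>5}: {f(30):,.0f}")
```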

### 2.2. Calculating the growth of the running time of an algorithm without counting every single time unit

2.2.1. use a GENERIC CONSTANT to refer to the time units taken by simple operations
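A sketch of the generic-constant idea, using a hypothetical `find_max` function: instead of tallying each time unit, fold the constant cost of the loop body into a single constant c, so the running time is T(N) = c*N + c0 and the growth is simply N:

```python
# Generic-constant sketch: the exact per-operation costs no longer
# matter, only how many times the constant-cost body executes.
def find_max(A):
    best = A[0]        # constant setup cost: c0 time units
    for x in A:        # body costs some constant c per iteration, N times
        if x > best:
            best = x
    return best        # total: T(N) = c*N + c0, i.e. linear growth
```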

### 2.3. Faster Computer VS Faster Algorithm

2.3.1. For large enough instances, investing time in a faster algorithm always beats investing money in a faster computer

2.3.2. Very big instances can only be solved by algorithms whose running time grows as NlogN or more slowly

2.3.3. Algorithms with quadratic or faster-growing running times are only useful for small instances
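A sketch of the trade-off, comparing a hypothetical computer that is 100x faster but runs a quadratic algorithm against an ordinary computer running an NlogN algorithm (times in arbitrary units):

```python
import math

def t_fast_machine_quadratic(n):
    return n ** 2 / 100       # quadratic algorithm, 100x hardware speedup

def t_slow_machine_nlogn(n):
    return n * math.log2(n)   # better algorithm, ordinary hardware

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}: fast machine {t_fast_machine_quadratic(n):,.0f}"
          f" vs better algorithm {t_slow_machine_nlogn(n):,.0f}")
```

The faster machine wins at small N, but past a certain size the better algorithm wins by an ever-growing margin, which is why hardware alone cannot rescue a quadratic algorithm on very big instances.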

## 3. WORST AND BEST CASE ANALYSIS

### 3.1. Algorithms with a running time dependent on both the size and the content of the input data

### 3.2. The point at which execution ends can depend on conditions in the data (e.g. a search may return as soon as it finds its target)

### 3.3. Worst and Best case analysis

3.3.1. Worst Case: usually the higher growth function

3.3.2. Best Case: the lower growth function, often constant, i.e. T(N) = 1
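Linear search is the classic sketch of this split: the same function is constant-time in the best case and linear in the worst case, depending on where (or whether) the target appears in the data:

```python
def linear_search(A, target):
    """Returns the index of target in A, or -1 if absent.
    Running time depends on both the size and the content of A."""
    for i, x in enumerate(A):
        if x == target:
            return i   # best case: target is first -> T(N) = 1
    return -1          # worst case: target absent -> N comparisons, T(N) = N
```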

## 4. ASYMPTOTIC ANALYSIS

### 4.1. WHAT IS ASYMPTOTIC ANALYSIS?

4.1.1. Analyzing algorithms when the input data is large

4.1.1.1. OMEGA NOTATION

4.1.1.1.1. originally proposed for analyzing functions in mathematics

4.1.1.1.2. Invention: 1914, Godfrey Hardy and John Littlewood

4.1.1.1.3. Defines a set of functions that act as a lower bound of T(N). T(N) is Omega(g(N)) if: c*g(N) <= T(N) for all N >= n0, where c and n0 are positive constants

4.1.1.2. BIG-O NOTATION

4.1.1.2.1. used to describe the behaviour of a function when its argument tends to infinity

4.1.1.2.2. Invention: 1894, Paul Bachmann, well before modern computer architecture

4.1.1.2.3. Defines a set of functions that act as an upper bound of T(N). T(N) is O(g(N)) if: c*g(N) >= T(N) for all N >= n0, where c and n0 are positive constants

4.1.1.3. THETA NOTATION

4.1.1.3.1. was proposed especially for algorithm analysis in the field of computer science

4.1.1.3.2. Invention: 1976, Donald Knuth

4.1.1.3.3. Theta notation finds a single function that acts simultaneously as an upper and a lower bound for our running time or memory space function. T(N) is Theta(g(N)) if: c1*g(N) <= T(N) <= c2*g(N) for all N >= n0, where c1, c2, and n0 are positive constants
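A numeric sanity check (not a proof) of the Theta definition, using the assumed running-time function T(N) = 3N + 1 with witness constants c1 = 3, c2 = 4, n0 = 1:

```python
# Check c1*g(N) <= T(N) <= c2*g(N) for all N >= n0 over a large range.
def T(n):
    return 3 * n + 1

def g(n):
    return n

c1, c2, n0 = 3, 4, 1
assert all(c1 * g(n) <= T(n) <= c2 * g(n) for n in range(n0, 10_000))
print("3N + 1 is Theta(N), witnessed by c1=3, c2=4, n0=1")
```

The check only samples a finite range, so it can refute a bad choice of constants but cannot replace the algebraic argument (here: 3N <= 3N + 1 holds everywhere, and 3N + 1 <= 4N holds for all N >= 1).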

4.1.2. All three notations study the behaviour of a function as its argument tends to infinity