We need to analyze algorithms to choose the most efficient one to implement. One important aspect we look at is the **running time** of an algorithm, which often depends on the **size of the input**. (See Section 2.2 of Introduction to Algorithms.)

### Size of Input

Examples of how to measure the input size of a problem:

- The **number of items** in the input. E.g., in the insertion sort algorithm, the number of elements in the array or list to be sorted.
- The **total number of bits** needed to represent the input in binary notation; a larger number needs more bits (both measures are illustrated in the sketch after this list).
- A formula that depends on two or more of these measures.
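As a rough illustration, here is a minimal Python sketch of both measures for one concrete input (the example values are my own, not from the source):

```python
# Measure the size of a small example input in two ways.
items = [31, 41, 59, 26, 41, 58]

# 1. Number of items: the natural measure for sorting problems.
n_items = len(items)

# 2. Total number of bits: the natural measure for arithmetic on large
#    numbers. int.bit_length() gives the bits for one non-negative integer.
n_bits = sum(x.bit_length() for x in items)

print(f"number of items: {n_items}")  # 6
print(f"total bits:      {n_bits}")   # 5+6+6+5+6+6 = 34
```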

### Running Time

We calculate the running time of an algorithm by summing, over all lines of its pseudocode, the **cost** of executing that line once multiplied by the **number of times** that line executes.
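As a sketch of this bookkeeping, here is insertion sort in Python with each line annotated by how many times it runs (following the book's analysis, t_j stands for how many times the while test runs for a given j):

```python
def insertion_sort(a):
    """Sort list a in place; the comments track how often each line runs."""
    n = len(a)
    for j in range(1, n):             # outer loop body runs n - 1 times
        key = a[j]                    # n - 1 times
        i = j - 1                     # n - 1 times
        # Let t_j be the number of times this test runs for a given j:
        # t_j = 1 when a[0..j-1] is already <= key, and up to about j
        # when key must travel all the way to the front.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]           # t_j - 1 times for each j
            i -= 1                    # t_j - 1 times for each j
        a[i + 1] = key                # n - 1 times
    return a
```

Multiplying each line's cost by its count and summing gives the total running time; only the t_j terms depend on how sorted the input already is.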

E.g., for the insertion sort algorithm, depending on how sorted the given list already is, the running time in the **best case** is a **linear function** of n (the list is already sorted, so the inner loop exits immediately for every element), while the running time in the **worst case** is a **quadratic function** of n (the list is in reverse order, so every element is compared with everything before it). The counting sketch below shows both cases.
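One way to see the difference concretely is to count key comparisons instead of measuring wall-clock time; this is a hypothetical instrumented version, not code from the book:

```python
def count_comparisons(a):
    """Insertion sort that returns the number of key comparisons made."""
    a = list(a)
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1          # one a[i] > key comparison
            if a[i] <= key:
                break
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return comparisons

n = 1000
print(count_comparisons(list(range(n))))         # sorted: 999, linear in n
print(count_comparisons(list(range(n, 0, -1))))  # reversed: 499500, ~n^2/2
```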

Such a detailed calculation of running time is often impractical. We usually focus on:

- The **worst-case running time**, because it guarantees that the algorithm solves the problem within that time on any input, which is an important assurance.
- The **rate of growth** or **order of growth** (**Θ**) of the running time, which keeps only the highest-order term of the input size in the running-time formula and drops its constant coefficient (see the sketch after this list). E.g., insertion sort has a worst-case running time of **Θ(n^2)**.
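To see why the highest-order term dominates, here is a small sketch; the coefficients a, b, c are made-up numbers for illustration, standing in for the constant per-line costs:

```python
# Hypothetical worst-case running time T(n) = 2n^2 + 100n + 500.
a, b, c = 2, 100, 500
T = lambda n: a * n * n + b * n + c

for n in (10, 100, 1000, 10_000, 100_000):
    # The ratio T(n) / n^2 approaches a = 2 as n grows: the n^2 term
    # dominates, which is why we summarize the growth as Theta(n^2).
    print(n, T(n) / n**2)
```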

Insertion sort, with a worst-case running time of Θ(n^2), is not as efficient as **merge sort**, which has a running time of **Θ(n lg n)**.
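For comparison, a minimal merge sort sketch (one common top-down formulation, not the book's pseudocode):

```python
def merge_sort(a):
    """Return a new sorted list; runs in Theta(n lg n) time."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # recursively sort each half
    right = merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```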