Introduction to Algorithms
Algorithm in a nutshell:
- An algorithm is a well-defined procedure that takes input and produces some output.
- It is a tool for solving a well-specified problem.
- It consists of well-defined computational steps.
- It takes an input, applies those computational steps to it, and returns an output.
- An algorithm is said to be correct if, for every input, it halts with the correct output.
- An incorrect algorithm is one that produces a wrong result on some or all inputs, or fails to halt.
- However, incorrect algorithms can still be useful for some problems, provided their error rate can be controlled; randomized primality tests are a classic example.
- Multiple algorithms may exist for solving the same problem; they often differ from one another in efficiency (see the sketch after this list).
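To make the last point concrete, here is a minimal sketch in Python (the function names are ours, purely illustrative) of two algorithms that solve the same problem, searching for a value in a list, but differ in efficiency: linear search checks every element, while binary search repeatedly halves a sorted list.

```python
def linear_search(items, target):
    """Check each element in turn; works on any list."""
    for i, value in enumerate(items):
        if value == target:
            return i          # index of the first match
    return -1                 # target is absent

def binary_search(sorted_items, target):
    """Halve the search range each step; requires a sorted list."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1      # target lies in the upper half
        else:
            hi = mid - 1      # target lies in the lower half
    return -1

data = [1, 3, 5, 7, 9, 11, 13, 15]
print(linear_search(data, 13))   # 6, found after 7 comparisons
print(binary_search(data, 13))   # 6, found after 3 comparisons
```

Both return the correct answer, but on a sorted list of a million elements linear search may need a million comparisons while binary search needs about twenty.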
Parameters for measuring the efficiency of an algorithm:
- Time Complexity: The time required by the algorithm for evaluating the input and producing the output.
- Space Complexity: The memory required by the algorithm for evaluating the input and producing the output.
However, we don’t measure the time and space complexities in terms of actual seconds and bytes. The reason is simple: an algorithm running on machine A will not take the same time and memory on machine B if the two machines have different hardware. We need a machine-independent unit of measure, so that the efficiency of algorithms can be compared irrespective of the environment they run in.
The standard practice is to estimate the complexity of an algorithm in the asymptotic sense, i.e., to use Big O notation to express its cost as a function of the size of the input n.
Common examples of Big O notation:
O(1), O(log n), O(n), O(n log n), O(n^2), O(c^n)
Where n is the size of the input and c is a constant greater than 1.
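To tie these classes to actual code, here is a small Python sketch (our own illustrative routines, not library functions) whose running times fall into three of the classes above:

```python
def first_element(items):
    """O(1): a single step, regardless of the input size."""
    return items[0]

def total(items):
    """O(n): one pass over the input."""
    s = 0
    for x in items:
        s += x
    return s

def has_duplicate(items):
    """O(n^2): compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

data = [3, 1, 4, 1, 5]
print(first_element(data))   # 3
print(total(data))           # 14
print(has_duplicate(data))   # True (1 appears twice)
```

Doubling n leaves first_element's cost unchanged, roughly doubles total's, and roughly quadruples has_duplicate's, which is exactly what these Big O classes predict.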
To learn more about Big O notation, refer to: https://goo.gl/JbERUo