Hello friends, this is Deepak from TechWithCode.com. Today is our first class of Data Structures and Algorithms, in which we will learn the introduction to algorithms. The following are the points we will study today. I will try my best to keep the language as easy as possible.
- Introduction to Algorithm
- Example of Algorithm
- Algorithms Design Approach
- Top-Down Approach
- Bottom-Up Approach
- Time Complexity
- Types of Time Complexity
- Best Case Time Complexity
- Worst Case Time Complexity
- Average Case Time Complexity
- Big O Notation
- Space complexity
If you think this helps you a bit, then please share and comment below. Thanks!
1. Introduction to Algorithm
An algorithm is a finite set of steps required to solve a problem.
An algorithm must have the following properties:
Input:
An algorithm must have zero or more quantities as input which are externally supplied.
Output:
An algorithm must produce one or more outputs after processing a set of statements.
Definiteness:
Each instruction must be clear and distinct.
Finiteness:
The algorithm must terminate after a finite number of steps.
Effectiveness:
Each operation must be definite, and it should also be feasible.
2. Example of Algorithm
The following algorithm searches an array of n elements for a given value (a linear search):
1. Start
2. Accept the size of the array from the user (Read n)
3. Accept the array element values from the user (Read A[0] to A[n-1])
4. Accept the element to be searched from the user (Read value)
5. Set i = 0, flag = 0
6. Compare A[i] with value:
   If (A[i] equals value)
       Set flag = 1 and go to step 8
   Else
       Move to the next data element: i = i + 1
7. If (i < n) go to step 6
8. If (flag = 1) then
       Print "found" and return i as the position
   Else
       Print "not found"
9. Stop
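The steps above can be sketched in Python. This is a minimal sketch of the linear search described, not code from the original post; the function name is my own:

```python
def linear_search(arr, value):
    """Return the index of value in arr, or -1 if not found."""
    for i in range(len(arr)):    # steps 6-7: walk through each element
        if arr[i] == value:      # step 6: compare A[i] with value
            return i             # step 8: found, return the position
    return -1                    # step 8: not found

# Usage
print(linear_search([4, 2, 7, 1], 7))  # 2
print(linear_search([4, 2, 7, 1], 9))  # -1
```

Note that the `flag` variable from the pseudocode is unnecessary here, because returning from the function directly signals whether the value was found.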
3. Algorithms Design Approach
The way of designing an algorithm to solve a problem
Two algorithm design approaches are available:
- Top-Down Approach:
- Bottom-Up Approach:
4. Top-Down Approach
A top-down approach starts by identifying the major components of the system or program, decomposing them into their lower-level components, and iterating until the desired level of module complexity is achieved.
In this approach, we start with the topmost module and incrementally add the modules that it calls.
It takes the form of a step-wise procedure.
The solution is divided into sub-tasks, and each sub-task is further divided into smaller sub-tasks.
The sub-tasks are then combined into a single solution.
Basically, we go from top to bottom while designing an algorithm with this top-down approach.
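The top-down idea can be illustrated with a small sketch: we write the top-level function first, in terms of sub-tasks that do not exist yet, and only then fill in the lower-level pieces. The function names here are hypothetical, chosen just for this illustration:

```python
# Top-down: the main task is written first, in terms of sub-tasks.
def compute_average(numbers):
    total = compute_sum(numbers)      # sub-task 1, defined later
    count = compute_count(numbers)    # sub-task 2, defined later
    return total / count

# The lower-level components are implemented afterwards.
def compute_sum(numbers):
    return sum(numbers)

def compute_count(numbers):
    return len(numbers)

print(compute_average([2, 4, 6]))  # 4.0
```

A bottom-up design of the same program would write `compute_sum` and `compute_count` first and combine them into `compute_average` at the end.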
5. Bottom-Up Approach
It is the inverse of the top-down method.
A bottom-up approach starts with designing the most basic or primitive component & proceeds to higher-level components.
Starting from the very bottom, operations that provide a layer of abstraction are implemented.
The programmer may write code to perform basic operations, then combine those into modules, which are finally combined to form the overall system structure.
Basically, we go from bottom to top while designing an algorithm with this bottom-up approach.
6. Time complexity
The Time Complexity of the program/algorithm is the amount of computer time that it needs to run to completion.
While calculating time complexity, we develop a frequency count for the key statements that dominate the running time.
How to determine the Time Complexity of an algorithm?
In order to learn about finding the Time complexity of an Algo, Let's consider an example:
Consider three algorithms given below:-
Algorithm A:- a = a + 1
Algorithm B:-
for x = 1 to n step 1
    a = a + 1
loop
Algorithm C:-
for x = 1 to n step 1
    for y = 1 to n step 1
        a = a + 1
    loop
loop
Explanation:
The frequency count for algorithm A is 1, as the statement a = a + 1 executes only once.
The frequency count for algorithm B is n, as a = a + 1 is a key statement that executes n times because the loop runs n times.
The frequency count for algorithm C is n² (n square), as a = a + 1 is a key statement that executes n² times: the inner loop runs n times on each iteration of the outer loop, and the outer loop itself runs n times.
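We can verify these frequency counts directly by instrumenting the three algorithms with a counter. This is a small sketch in Python with n = 5, not part of the original post:

```python
n = 5

# Algorithm A: the key statement executes once.
count_a = 0
a = 0
a = a + 1
count_a += 1

# Algorithm B: the loop runs n times, so the key statement executes n times.
count_b = 0
for x in range(n):
    a = a + 1
    count_b += 1

# Algorithm C: nested loops, so the key statement executes n * n times.
count_c = 0
for x in range(n):
    for y in range(n):
        a = a + 1
        count_c += 1

print(count_a, count_b, count_c)  # 1 5 25
```

With n = 5 the counts come out to 1, 5, and 25, matching the frequency counts of 1, n, and n² derived above.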
7. Types of Time Complexity
There are different types of time complexities that can be analyzed for an algorithm:
- Best Case Time Complexity:
- Worst Case Time Complexity:
- Average Case Time Complexity:
8. Best Case Time Complexity
It is a measure of the minimum time that the algorithm will require for an input of size n.
The running time of many algorithms varies not only for inputs of different sizes but also for the input of the same size.
For example, the running time of some sorting algorithms depends on the ordering of the input data. Therefore, if the input data of n items is presented in already sorted order, the operations performed by the algorithm will take the least time.
9. Worst Case Time Complexity
It is a measure of the maximum time that the algorithm will require for an input of size n.
For example, if n input data items are supplied in reverse order to a simple sorting algorithm such as bubble sort, the algorithm will require on the order of n² operations to perform the sort, which corresponds to its worst-case time complexity.
10. Average Case Time Complexity
The time that an algorithm requires to execute on typical input data of size n is known as the average-case time complexity.
We can say that the value obtained by averaging the running time of an algorithm over all possible inputs of size n determines the average-case time complexity.
11. Big O Notation
Big O notation is used in Computer Science to describe the performance or complexity of an algorithm.
Big O specifically describes the worst-case scenario and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.
O(1)
O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
E.g. push and pop operations on a stack.
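As a quick illustration, a Python list can serve as a stack: `append` and `pop` work at the end of the list, so each call touches a single element regardless of how many items the stack holds (Python's `append` is amortized O(1)):

```python
stack = []
stack.append(10)   # push: O(1), independent of stack size
stack.append(20)   # push
top = stack.pop()  # pop: O(1), removes and returns the last item
print(top, stack)  # 20 [10]
```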
O(N)
O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
Big O notation will always assume the upper limit.
E.g. Linear search with unsorted data.
O(N²) [Order of N square]
O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
This is common with algorithms that involve nested iterations over the data set.
E.g. Comparing two-dimensional arrays of size n.
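A small sketch of the 2D-array comparison example: checking two n × n matrices element by element needs a nested loop, so the work grows as O(N²). The function name is my own, chosen for this illustration:

```python
def matrices_equal(a, b):
    """Compare two n x n matrices element by element: O(n^2) comparisons."""
    n = len(a)
    for i in range(n):          # outer loop: n iterations
        for j in range(n):      # inner loop: n iterations per outer pass
            if a[i][j] != b[i][j]:
                return False
    return True

m1 = [[1, 2], [3, 4]]
m2 = [[1, 2], [3, 4]]
print(matrices_equal(m1, m2))  # True
```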
O(log N): Logarithmic Time
The iterative halving of the data set, as in binary search, produces a growth curve that rises quickly at the beginning and slowly flattens out as the size of the data set increases.
E.g. binary search: if an input data set containing 10 items takes one second to complete, a data set containing 100 items takes roughly two seconds, and a data set containing 1,000 items takes roughly three seconds.
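A minimal sketch of binary search in Python (my own function name, assuming the array is already sorted):

```python
def binary_search(arr, value):
    """arr must be sorted; halves the search range each step -> O(log N)."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == value:
            return mid           # found: return the position
        elif arr[mid] < value:
            low = mid + 1        # discard the left half
        else:
            high = mid - 1       # discard the right half
    return -1                    # value not present

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Because each iteration discards half of the remaining elements, doubling the data set adds only one extra step, which is exactly the logarithmic growth described above.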
O(N log N): Linearithmic Time
E.g. more advanced sorting algorithms such as quick sort and merge sort.
12. Space complexity
The space complexity of a program/algorithm is the amount of memory that it needs to run to completion.
The space needed by the program is the sum of the following components.
Fixed space requirements:-
Fixed space is not dependent on the characteristics of the input and outputs.
Fixed space consists of space for simple variables, fixed-size variables, etc.
Variable space requirements:-
Variable space includes space needed by variables whose size depends upon the particular problem being solved, referenced variables, and the stack space required for recursion on particular instances of variables.
e.g. Additional space required where function uses recursion.
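The recursion example can be made concrete with a sketch: a recursive factorial keeps one stack frame per call, so its variable space requirement grows with n, while an iterative version reuses a single frame. Both function names are my own:

```python
# Recursive: each call adds a stack frame -> O(n) extra stack space.
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

# Iterative: a single frame and one accumulator -> O(1) extra space.
def factorial_iter(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5), factorial_iter(5))  # 120 120
```

Both compute the same result, but only the recursive version's variable space requirement depends on the size of the input.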