Big O Notation and Performance
Big O notation:
What is big O notation? It is a very common question in the programming world. Here I will explain in detail what I know about big O notation and performance.
In short, big 'oh' describes how an algorithm's run time grows relative to its input, in the sense of 'is of the same order as'. As the input grows, big O captures the relationship between input size and run time. Big 'oh' usually stands for the worst-case scenario of a function. One myth that grows in learners' minds is that big 'oh' is the actual execution time of a program; that is not true at all, so stay away from that idea.
Necessity of big 'oh': in computer science we use it to understand time and space complexity, so that we can measure an algorithm for a specific programming situation and make decisions about which approach to use in a program.
Let’s explain
If we have multiple implementations of the same function, how can we determine which one is the best? We want fast, less memory-intensive, and clear code.
Let's run a simple function in VS Code and see how long it takes.
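The snippet being timed is not shown above, so here is a minimal sketch of what such a timing test might look like, assuming the closed-form version of the sum and Node's performance.now() timer (the value of n is just an example):

const { performance } = require('perf_hooks')

// assumed version of the function: closed-form sum of 1..n
const AddSum = (n) => n * (n + 1) / 2

const start = performance.now()
AddSum(1000000)
const end = performance.now()
console.log(`time used : ${(end - start) / 1000} in second`)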
In the code above we run a function that adds up the numbers to n and measure how long the algorithm takes; the result was time used : 0.000022339001297950744 in second.
Let's see another code snippet.
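Again the exact snippet is not shown, so this is a sketch assuming the loop-based version of the same sum, timed the same way:

const { performance } = require('perf_hooks')

// assumed version: the same sum computed with a loop
const AddSum = (n) => {
  let total = 0
  for (let i = 0; i <= n; i++) {
    total += i
  }
  return total
}

const start = performance.now()
AddSum(1000000)
const end = performance.now()
console.log(`time used : ${(end - start) / 1000} in second`)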
In the code above we run another implementation of the same function and measure the time again; the result was time used : 0.0148699950017035 in second.
Notice that these timings vary: different machines, and even different runs on the same machine, produce different numbers. That is why we need a machine-independent way to talk about performance.
Now let's discuss some code.
TIME COMPLEXITY!
The first function
const AddSum = (n) => {
  return n * (n + 1) / 2
}
Here only three constant-time operations are used (*, +, /), and they run in constant time regardless of the input size. The growth of the function is constant, so the run time is O(1), constant time, and the big O is O(1).
Some rules of thumb for time complexity (TC), illustrated in the short sketch after this list:
- Arithmetic operations are constant
- Variable assignments are constant
- Accessing an element of an array is also constant
- In a loop, the time complexity is determined by the length of the loop: as n grows, the number of iterations grows
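A short sketch of these rules in one function (demoRules and the sample values are just for illustration):

const demoRules = (arr) => {
  const first = arr[0]       // accessing an element by index: O(1)
  let sum = first + 10       // arithmetic and assignment: O(1)
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i]            // the loop body runs n times: O(n)
  }
  return sum
}

console.log(demoRules([1, 2, 3]))  // 17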
The second function
const AddSum = (n) => {
  let total = 0
  for (let i = 0; i <= n; i++) {
    total += i
  }
  return total
}
Here five operations are used (=, +=, <=, ++, =), and most of them run n times inside the loop, so the big O is O(n): as the input grows, the run time grows proportionally.
const getAllNum = (n) => {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j)
    }
  }
}
Here one loop is nested inside the other, and each loop on its own is O(n), so the total is O(n) * O(n) = O(n²). If n doubles, the number of operations roughly quadruples.
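A small sketch to verify the quadratic growth (countPairs is a hypothetical helper that only counts how often the inner body runs):

const countPairs = (n) => {
  let ops = 0
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++  // one unit of work per (i, j) pair
    }
  }
  return ops
}

console.log(countPairs(100))  // 10000
console.log(countPairs(200))  // 40000: doubling n quadruples the operations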
SPACE COMPLEXITY:
Like time complexity, space complexity measures how much memory, or additional memory, an algorithm needs to allocate while it runs.
Some rules to note:
- Booleans, numbers, and undefined take constant space (in JS)
- Strings take O(n) space, where n is the string length
- Reference types such as arrays and objects take O(n) space, where n is the array length or the number of keys
Now examine the code below for space complexity.
// sums the elements of the array; total and i are single numbers
const AddSum = (array) => {
  let total = 0
  for (let i = 0; i < array.length; i++) {
    total += array[i]
  }
  return total
}
Here total and i are numbers, and numbers take constant space. Their values grow with the size of the array, but the memory they occupy does not, so the space complexity is O(1): no matter the array size, the number of variables stays the same. Another example 👍
const countDouble = (array) => {
  let newArray = []
  for (let i = 0; i < array.length; i++) {
    newArray.push(2 * array[i])
  }
  return newArray
}
Here the new array takes O(n) space, because it grows in proportion to the size of the input array.
General discussion
TC: O(1), order of one: constant run time complexity.
Generally, constants are dropped when stating a TC, because a constant factor does not change how the run time grows, unlike variables whose values change with the input. Fixed values such as pi or gravity = 9.8 are examples of constants.
Suppose var a = 4, b = 3, result = a + b; the TC is O(1) + O(1) + O(1) = O(3), which simplifies to O(1).
See the picture below.
TC: order of n, O(n): a linear relationship between input size and run time.
If the data size increases, the time increases. Suppose two loops run one after another, like:
for (let i = 0; i < n; i++) { /* first loop: O(n) */ }
let j = 0
while (j < n) { j++ }  // second loop: also O(n)
Here the calculation is O(n) + O(n) = 2O(n) = O(n), because constant factors are dropped.
See the picture below.
TC: order of n², O(n²): quadratic run time complexity
Suppose nested loops produce n² operations, like:
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    // ...
  }
}
In the code above, the inner loop runs n times for every one of the n iterations of the outer loop, so the TC is O(n * n) = O(n²). The algorithm becomes much more time consuming as the input size increases.
See the picture below:
TC: O(log n): logarithmic run time 👍
In an O(log n) TC the problem keeps getting divided in two. Binary search is the classic example: to find a value in a sorted array, we split the remaining part of the array in half at every step until we reach the answer. It is not quite O(1), but it is much faster than O(n) as the input grows.
This is where the log n comes from: the array is divided into two parts on each iteration, so the size of the remaining input keeps decreasing.
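A minimal binary search sketch to make the halving concrete (this is an illustrative example assuming a sorted array of numbers, not code from the original post):

const binarySearch = (sortedArray, target) => {
  let low = 0
  let high = sortedArray.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sortedArray[mid] === target) return mid   // found it
    if (sortedArray[mid] < target) low = mid + 1  // keep the right half
    else high = mid - 1                           // keep the left half
  }
  return -1  // not found
}

// each iteration halves the remaining range, so the run time is O(log n)
console.log(binarySearch([1, 3, 5, 7, 9, 11], 7))  // 3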
See the picture below.
continue ... more writing...