Computational Complexity
We are living in a golden age of digital technology, with more mobile devices than people on the planet. Yet few of us stop to consider the interdependent systems behind those devices, or what makes them so complex.
The Role of Programs
Every digital device runs on programs: instructions that humans write for computers to carry out. At the lowest level, a computer executes binary code (1s and 0s); hexadecimal (digits 0-9 and A-F) is simply a compact notation for that same binary, since each hex digit stands for four binary digits. Writing code therefore means producing instructions that can be translated into binary the computer can execute, and that translation adds complexity of its own.
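To make that relationship concrete, here is a quick Python illustration (the number 202 is an arbitrary example of mine, not from the article):

```python
# The same value written in binary and in hexadecimal. Each hex
# digit stands for exactly four binary digits, so hexadecimal is
# just a compact, human-friendly spelling of binary.
n = 202
print(bin(n))  # 0b11001010
print(hex(n))  # 0xca
print(int("11001010", 2) == int("ca", 16))  # True: same number
```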
Challenges in Code Writing
So, how should we write code? Naturally, we want the quickest route. That breaks down into two goals: making the code easy to translate into binary (which I'll refer to as Challenge A) and reducing the amount of time the program takes to run (Challenge B).
Challenge A: Code Conciseness
The goal is to write concise code, which reduces the work involved in translating it: vast amounts of code are far harder to translate than small amounts. By comparison, think of translating a document. Would you rather it take 100 requests to complete, or just one?
Challenge B: Execution Time
A program must also execute efficiently. The aim is to reduce the amount of time the program takes to run.
Complexity: Measuring Program Speed
How would we go about judging the speed of various programs? Measuring the time from start to finish seems easy at first, but it's not. To decide whether a candidate measure is useful, we will check three things:
- Test 1: Does it stay the same for different implementations that are, intuitively, the same program?
- Test 2: Does it stay the same when the same program is run again?
- Test 3: Does it stay the same on different computers?
So, without further ado, let's get started!
The first approach we'll examine is the straightforward one: timing the entire run from start to finish. But as I just mentioned, there are more workable answers, so let's apply the three tests. The first one fails, since how heavily the CPU is being used can change the time. The second fails for the same reason. And the third fails too, because different computers have varying amounts of processing power. A terrible start!

Let's try a better measurement: counting steps. This method counts the number of lines the program executes, and when it comes across a repeat statement, it multiplies the lines inside by the number of times they repeat.
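To see the difference between the two approaches, here is a minimal Python sketch. The function, the step tallies, and the trial loop are my own illustration (the article itself contains no code):

```python
import time

def sum_first(n):
    """Add the numbers 1..n with a simple loop."""
    total = 0                 # 1 step
    for i in range(1, n + 1):
        total += i            # repeated n times -> n steps
    return total              # 1 step, so about n + 2 steps overall

# Approach 1: time the whole run. Two trials usually give two
# different numbers, because the result depends on how busy the
# CPU happens to be at that moment.
for trial in (1, 2):
    start = time.perf_counter()
    sum_first(1_000_000)
    print(f"trial {trial}: {time.perf_counter() - start:.4f} s")

# Approach 2: count steps instead. The count depends only on the
# input size n, so it is the same on every run and every machine.
def step_count(n):
    return n + 2  # 1 (init) + n (loop body) + 1 (return)

print("steps:", step_count(1_000_000))  # always 1000002
```

Timing gives a different number on every trial; the step count never changes. Now let's run the three tests on the counting method.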
- Test 1: This might not work, because two implementations that are intuitively the same program can differ in bookkeeping: one might add a counter variable or something similar, which changes the count.
- Test 2: This works, since counting lines doesn't involve anything that varies with the system. The same reasoning applies to Test 3, so this measure is fairly effective.
Now only one challenge is left: Test 1. This is where "Big O Notation" comes in. If the input size is n, Big O drops the constant factors and removes every term except the highest-order one. So 2n + 3 becomes "n", and 3n^2 + 6n + 5 becomes "n^2" (n^2 means n*n).
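As a concrete illustration, here is a made-up function whose step count works out to about 2n + 3 (the function and the tallies are my own example):

```python
def sum_and_difference(items):
    """A made-up routine whose step count is about 2n + 3."""
    n = len(items)        # 1 step
    total = 0             # 1 step
    for x in items:       # first pass over the list: n steps
        total += x
    for x in items:       # second pass over the list: n steps
        total -= x
    return total, n       # 1 step
# Total: 2n + 3 steps. Big O drops the factor of 2 and the
# trailing 3, and simply calls this function O(n).
```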
Does this work?
Test 1: It works! Dropping constants removes the risk of getting different counts from different implementations.
Test 2: This also works, since it has nothing to do with counting something that changes based on the system, and the same reasoning covers Test 3. So, Big O notation is the correct measurement!
Understanding Types of Speed
Now that we've found the correct measurement, let's think about the different types of speed. Below are the most common complexities, from worst to best, with a small code example for each after the list.
- O(n!): n! means n*(n-1)*(n-2)*...*2*1, so it can get extremely large.
- O(2^n): This means 2 multiplied by itself n times.
- O(n^2): This means n*n.
- O(n log n): log(n) is a really slowly growing function, so O(n log n) is worse than O(n) but better than O(n^2).
- O(n): Quite self-explanatory; the work grows at the same rate n does.
- O(log n): As mentioned above, log(n) grows very slowly, so this is very good.
- O(1): Constant time; it doesn't change even when n is extremely large.
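To make these growth rates concrete, here is one small Python function for each class. These examples are my own illustrations (the article contains no code), and there are of course many other algorithms in each class:

```python
from itertools import permutations

def constant(items):             # O(1): one step, no matter how
    return items[0]              # long the list is

def logarithmic(items, target):  # O(log n): binary search halves
    lo, hi = 0, len(items) - 1   # the search range on every loop
    while lo <= hi:              # (assumes items is sorted)
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear(items):               # O(n): touches each item once
    total = 0
    for x in items:
        total += x
    return total

def linearithmic(items):         # O(n log n): comparison sorting,
    return sorted(items)         # like Python's built-in sort

def quadratic(items):            # O(n^2): a nested pass visits
    pairs = 0                    # every pair of items
    for a in items:
        for b in items:
            pairs += 1
    return pairs

def exponential(n):              # roughly O(2^n): naive Fibonacci
    if n < 2:                    # recomputes the same subproblems
        return n                 # over and over
    return exponential(n - 1) + exponential(n - 2)

def factorial_growth(items):     # O(n!): visits every possible
    count = 0                    # ordering of the items
    for _ in permutations(items):
        count += 1
    return count
```

Try these on small inputs and the difference becomes obvious quickly: the exponential and factorial functions become impractical long before n reaches 50, while the others barely notice.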
Conclusion
Navigating code efficiency means addressing its two challenges, adopting the right measurement, and understanding scalability through Big O Notation. Looking across the spectrum of speed complexities, from punishing factorial growth to efficient constant time, gives us insights essential for modern programming.
To stay updated with the latest developments in STEM research, visit ENTECT Online. This is our digital magazine for science, technology, engineering and mathematics.