The Complexity of Knowing the Time Complexity of Your Algorithm

Let me preface by saying that a little less than six months ago is when I first decided to dive into coding. Prior to that I had literally never taken a second to consider what really went into letting me surf the web, scroll through Instagram posts, watch stupid videos on YouTube, or message a friend, all from the convenience of my phone, and do all of this simultaneously, I might add. After watching some tutorials and taking a beginner's course through Coursera, I decided that coding was definitely something that interested me, and I was eager to keep consuming as much knowledge as possible. With careful consideration, and after talking with a few people who had experience in this field, I decided that a coding bootcamp was the next right move. After some more careful consideration and hours spent mulling over reviews and course outlines, I knew in my heart that Hack Reactor was the right choice.

After going through a rigorous three-month process to get in and prepare for the bootcamp, week 1 finally came! Right off the bat we dove into learning about different data structures. Up until this point I was sure arrays and objects were all there was to it, and I had become fairly fluent in accessing their values and manipulating them as required, but that was just “up until this point”…

In one of our first lectures, while we were learning about new types of data structures such as trees, linked lists, hash tables, and graphs, and figuring out how to implement them and traverse them to access their data, I was introduced to what is known as Big O notation…

Wait a second, pause… run that back one more time!

Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.

So with a combination of math being my nemesis back in high school, still being new to coding, and trying to wrap my head around these new data structures, now I've also got to worry about Big O notation and how much time and space my algorithms will use?? GREAT.

So if you find yourself in the same predicament as me and are struggling to wrap your head around this concept, I have found that looking at examples in code can be extremely beneficial.

O(1) or Constant

O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set. Such as accessing the first or last element in an array.
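Here's a quick JavaScript sketch of what I mean (the function names are just mine for illustration):

```javascript
// O(1): grabbing an element by index takes the same amount of time
// whether the array has three items or three million.
const getFirst = (arr) => arr[0];
const getLast = (arr) => arr[arr.length - 1];

getFirst([7, 14, 21]); // 7
getLast([7, 14, 21]);  // 21
```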

O(N) or Linear

O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. Such as adding all the elements in an array of integers.
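A rough sketch of that in JavaScript could look something like this:

```javascript
// O(N): every element is visited exactly once, so the work grows
// in direct proportion to the length of the array.
const sumAll = (numbers) => {
  let total = 0;
  for (const n of numbers) {
    total += n;
  }
  return total;
};

sumAll([1, 2, 3, 4]); // 10
```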

O(N**2) or Quadratic

O(N**2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N**3), O(N**4) etc. An example would be checking to see if an array of integers contains any of the same values.
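That duplicate check might look something like this (again, just my own sketch):

```javascript
// O(N**2): for each element we compare it against every element
// after it, so doubling the input roughly quadruples the work.
const hasDuplicates = (numbers) => {
  for (let i = 0; i < numbers.length; i++) {
    for (let j = i + 1; j < numbers.length; j++) {
      if (numbers[i] === numbers[j]) {
        return true;
      }
    }
  }
  return false;
};

hasDuplicates([3, 1, 4, 1, 5]); // true
hasDuplicates([3, 1, 4]);       // false
```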

Or a Bubble Sort Algorithm
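Which could look roughly like this (a minimal sketch, not the only way to write it):

```javascript
// Bubble sort: sweep the array over and over, swapping adjacent
// pairs that are out of order. The nested loops make it O(N**2).
const bubbleSort = (arr) => {
  const result = [...arr]; // copy so the original isn't mutated
  for (let i = 0; i < result.length - 1; i++) {
    for (let j = 0; j < result.length - 1 - i; j++) {
      if (result[j] > result[j + 1]) {
        [result[j], result[j + 1]] = [result[j + 1], result[j]];
      }
    }
  }
  return result;
};

bubbleSort([5, 1, 4, 2]); // [1, 2, 4, 5]
```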

O(2**N) or Exponential

O(2**N) denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2**N) function is exponential — starting off very shallow, then rising meteorically. Such as finding the nth number in the Fibonacci sequence with a recursive function.
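The one-line recursive version I'm talking about looks roughly like this (treat it as a sketch, and the name fib is just mine):

```javascript
// Recursive Fibonacci: each call branches into two more calls,
// so the total number of calls roughly doubles as n grows — O(2**N).
const fib = (n) => n < 2 ? n : fib(n - 1) + fib(n - 2);

fib(10); // 55
```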

While this function may look simple because I was able to write it in one line using ES6 arrow function syntax and a ternary operator, it's actually the most inefficient way, time-wise, to solve this problem. It can be solved using iteration or the Underscore.js _.memoize function to bring it down to O(n).
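Here's one way the iterative version might look (again, just a sketch under my own naming):

```javascript
// Iterative Fibonacci: one simple loop, so the time is O(n)
// instead of the exponential blow-up of the recursive version.
const fibIterative = (n) => {
  let prev = 0;
  let curr = 1;
  for (let i = 0; i < n; i++) {
    [prev, curr] = [curr, prev + curr];
  }
  return prev;
};

fibIterative(10); // 55
```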

O(log N) or Logarithmic

Now this is one that really threw me for a loop when I was first trying to wrap my head around it. The easiest way I can think of to describe it is that when searching for something in a particular data set, each iteration cuts the amount of data in half. This requires that your data is sorted to begin with; then you start in the middle, and if the value you're looking for is larger than the middle value, you can eliminate the first half, or vice versa, and so on and so forth. So I'm sure you can imagine this would be extremely useful for a very large amount of data. This is known as a Binary Search.
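A rough JavaScript sketch of a binary search over a sorted array of numbers (the function and variable names are just for illustration):

```javascript
// Binary search: each comparison throws away half of the remaining
// data, so the number of steps grows as O(log N).
const binarySearch = (sortedArr, target) => {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) {
      low = mid + 1;  // target must be in the upper half
    } else {
      high = mid - 1; // target must be in the lower half
    }
  }
  return -1; // not found
};

binarySearch([2, 5, 8, 12, 16, 23], 16); // 4
```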