2.4 Performance of JavaScript Data Structures

Now that you have a general idea of Big-O notation and the differences between the common growth functions, our goal in this section is to tell you about the Big-O performance of the operations on JavaScript lists and dictionaries. We will then show you some timing experiments that illustrate the costs and benefits of using certain operations on each data structure. It is important for you to understand the efficiency of these JavaScript data structures because they are the building blocks we will use as we implement other data structures in the remainder of the book. In this section we are not going to explain why the performance is what it is. In later chapters you will see some possible implementations of both arrays and objects and how the performance depends on the implementation.

2.4.1 Lists

We have many choices to make when implementing the list data structure, and each of these choices could have an impact on how fast list operations perform. We can look at the most common uses of the list data structure and optimize its implementation so that the most common operations are very fast. Of course, we also try to make the less common operations fast, but when a tradeoff has to be made, the performance of a less common operation is often sacrificed in favor of the more common one.

Two common operations are indexing and assigning to an index position. Both of these operations take the same amount of time no matter how large the list becomes. When an operation like this is independent of the size of the list, it is $$O(1)$$.
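To make this concrete, here is a minimal sketch (not part of the book's own benchmark code) that times a fixed number of reads and writes at an index position on arrays of very different sizes. The function name timeIndexAccess and the iteration count are illustrative choices, and the exact numbers will vary by machine and JavaScript engine; the point is that the time per access stays roughly constant as the array grows.

// Hypothetical sketch: time 100,000 index reads and writes on an array of the given size.
function timeIndexAccess(size) {
  var list = new Array(size).fill(0);
  var start = process.hrtime();

  for (var i = 0; i < 100000; i++) {
    list[size - 1] = list[0] + 1; // one read and one write per iteration
  }

  var end = process.hrtime(start);
  end = end[0] * 1000000 + end[1] / 1000; // convert [seconds, nanoseconds] to microseconds
  console.log(`size ${size}: ${end} microseconds`);
}

timeIndexAccess(1000);
timeIndexAccess(1000000); // roughly the same time, despite 1000 times more elements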

Another very common programming task is to grow a list by appending to the end. As we will see later, we use a JavaScript array as the list's underlying data store. There are three common ways to make an array longer: the push() method, the concat() method, and manual index assignment (arr[arr.length] = value). Appending with push() or manual index assignment is $$O(1)$$ (amortized), while concat() returns a new array and copies every existing element into it, so it becomes more expensive as the array grows. As the benchmark below shows, manual index assignment is the fastest of the three. This is the kind of thing that is important to know, because it can help you make your own programs more efficient by choosing the right tool for the job.

var arr = [];

// Append n + 1 elements using Array.prototype.push().
function fn_push(n) {
  var start = process.hrtime();

  for (var i = 0; i < (n + 1); i++) {
    arr.push(n);
  }

  var end = process.hrtime(start);
  end = end[0] * 1000000 + end[1] / 1000; // convert [seconds, nanoseconds] to microseconds
  console.log(`Computed in ${end} microseconds`);
}

// Append n + 1 elements by assigning to the index just past the last element.
function fn_push_manually(n) {
  var start = process.hrtime();

  for (var i = 0; i < (n + 1); i++) {
    arr[arr.length] = n;
  }

  var end = process.hrtime(start);
  end = end[0] * 1000000 + end[1] / 1000; // convert [seconds, nanoseconds] to microseconds
  console.log(`Computed in ${end} microseconds`);
}

// Append n + 1 elements with concat(). Note that concat() does not modify
// arr in place; it returns a new array, so the result must be assigned back.
function fn_concat_end(n) {
  var start = process.hrtime();

  for (var i = 0; i < (n + 1); i++) {
    arr = arr.concat([n]);
  }

  var end = process.hrtime(start);
  end = end[0] * 1000000 + end[1] / 1000; // convert [seconds, nanoseconds] to microseconds
  console.log(`Computed in ${end} microseconds`);
}

fn_push(1000); // Computed in 240.77 microseconds
arr = [];
fn_push_manually(1000); // Computed in 47.038 microseconds
arr = [];
fn_concat_end(1000); // Computed in 195.935 microseconds

Now that we have seen how performance can be measured concretely, you can look at the table below to see the Big-O efficiency of all the basic list operations.

results matching ""

    No results matching ""