Throughout our lectures, we learned many algorithms for sorting a Python list and compared how efficient these algorithms are relative to one another. We compared the algorithms using Big-O notation, which describes how the number of steps executed grows when sorting a list of n items. One of the sorting algorithms we looked at was Selection Sort:
def selectionSort(alist):
    # Each pass places the largest remaining item at index fillslot
    for fillslot in range(len(alist) - 1, 0, -1):
        positionOfMax = 0
        # Scan the unsorted portion for the largest item
        for location in range(1, fillslot + 1):
            if alist[location] > alist[positionOfMax]:
                positionOfMax = location

        # Swap the largest item into its final position
        temp = alist[fillslot]
        alist[fillslot] = alist[positionOfMax]
        alist[positionOfMax] = temp

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
selectionSort(alist)
print(alist)
From the code above, we say that the selection sort algorithm is O(n^2) because, when sorting a list of n elements, the outer for loop runs n - 1 times, and on each pass the inner for loop performs up to n - 1 comparisons, giving roughly n^2/2 comparisons in total. This runtime complexity is quite inefficient when compared to the Quick Sort or Merge Sort algorithms, which have O(n log n) complexity on average.
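To see that growth concretely, here is a small sketch of my own (not from the lectures) that counts every comparison selection sort makes; the countComparisons helper is hypothetical, written just for this illustration.

import random

def countComparisons(alist):
    # Same selection sort as above, but tallies every comparison made
    comparisons = 0
    for fillslot in range(len(alist) - 1, 0, -1):
        positionOfMax = 0
        for location in range(1, fillslot + 1):
            comparisons += 1
            if alist[location] > alist[positionOfMax]:
                positionOfMax = location
        alist[fillslot], alist[positionOfMax] = alist[positionOfMax], alist[fillslot]
    return comparisons

for n in [10, 100, 1000]:
    data = random.sample(range(n * 10), n)
    # The inner loop shrinks by one each pass, so the count is exactly n*(n-1)/2
    print(n, countComparisons(data), n * (n - 1) // 2)

Running this, the measured count always equals n(n-1)/2, which grows like n^2/2. Notably, the comparisons happen no matter how the input is ordered, which is why selection sort's best and worst cases are both O(n^2).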
My goal when it comes to sorting is to learn the worst-case and best-case complexities of the algorithms we've learned and, most importantly, to understand why each algorithm has that specific complexity. To achieve this I will practice tracing the code of the algorithms, trying to notice the lines of code that get executed the most (such as the nested for loop in the example above).
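As one way to practice this, here is a sketch of my own (not course material) using a bubble sort variant with an early exit. Unlike selection sort, its comparison count depends on the input order, so tracing it shows why its best case is O(n) while its worst case stays O(n^2).

def bubbleSortCount(alist):
    # Bubble sort with an early exit; returns how many comparisons ran
    comparisons = 0
    for passnum in range(len(alist) - 1, 0, -1):
        swapped = False
        for i in range(passnum):
            comparisons += 1
            if alist[i] > alist[i + 1]:
                alist[i], alist[i + 1] = alist[i + 1], alist[i]
                swapped = True
        if not swapped:
            # No swaps means the list is already sorted, so skip later passes
            break
    return comparisons

n = 100
best = list(range(n))            # already sorted: one pass is enough
worst = list(range(n, 0, -1))    # reverse sorted: every pass runs fully
print(bubbleSortCount(best))     # 99 comparisons, i.e. O(n)
print(bubbleSortCount(worst))    # 4950 = n*(n-1)/2 comparisons, i.e. O(n^2)

Tracing which lines execute most in each case makes the gap visible: in the best case the inner loop body runs only n - 1 times before the early exit, while in the worst case it runs on every pass.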