Data Structures: Space & Time Complexity Hacks
Observing the time complexity of different algorithms
- Hacks
  - Record your findings when testing the time elapsed of the different algorithms.
  - Although we will go more in depth later, time complexity is a key concept that relates to the different sorting algorithms. Do some basic research on the different types of sorting algorithms and their time complexity.
  - Why is time and space complexity important when choosing an algorithm?
  - Should you always use a constant time algorithm / should you never use an exponential time algorithm? Explain.
  - What are some general patterns that you noticed to determine each algorithm's time and space complexity?
- Example of constant time algorithm
Hacks
- Record your findings when testing the time elapsed of the different algorithms.
I noticed that the time elapsed when testing the different algorithms depended on the data that was input. For example, we ran code that printed an image, and the elapsed time depended on the size of the list being processed.
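To make that concrete, here is a minimal sketch of the kind of timing I did (the actual image-printing code from class isn't shown here, so I'm timing Python's built-in `sorted` on placeholder list sizes instead):

```python
import time

def time_sorted(n):
    # Build a list of n descending numbers, then time how long sorting it takes.
    data = list(range(n, 0, -1))
    start = time.time()
    sorted(data)
    return time.time() - start

# The elapsed time grows as the list gets bigger.
for size in (1_000, 10_000, 100_000):
    print(f"n = {size:>7}: {time_sorted(size):.6f} seconds")
```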
- Although we will go more in depth later, time complexity is a key concept that relates to the different sorting algorithms. Do some basic research on the different types of sorting algorithms and their time complexity.
I did some basic research and came to the conclusion that there are 8 main sorting algorithms. Below are their names and typical time complexities.
- Bubble Sort: O(n^2)
- Selection Sort: O(n^2)
- Insertion Sort: O(n^2), but close to O(n) on nearly sorted input
- Merge Sort: O(n log n)
- Quick Sort: O(n log n) on average, O(n^2) in the worst case
- Heap Sort: O(n log n)
- Counting Sort: O(n + k), where k is the range of the values
- Radix Sort: O(d · (n + k)), where d is the number of digits
All of the sorting algorithms mentioned above aim to arrange a collection of items in ascending or descending order. Each one reaches that goal through a different set of comparison and rearrangement steps, which is why their time and space complexities differ.
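As a concrete sketch of one of them (my own illustrative version, not code from the lesson), here is bubble sort; the nested loops are what make it O(n^2):

```python
def bubble_sort(lst):
    # Sweep the list repeatedly, swapping adjacent out-of-order pairs.
    # The outer loop runs up to n times and the inner loop up to n times,
    # so the work grows roughly with n * n, i.e. O(n^2).
    n = len(lst)
    for i in range(n):
        for j in range(n - i - 1):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```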
- Why is time and space complexity important when choosing an algorithm?
First, a definition: time and space complexity are measures of how much time and memory an algorithm needs to execute relative to the size of its input.
Time and space complexity are important when choosing an algorithm because they determine its performance and efficiency, and they let you estimate how much time and memory the algorithm will need before you run it on large inputs.
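For example (a sketch I wrote to illustrate the trade-off, not part of the original hacks), checking a list for duplicates can be done with a nested loop that uses almost no extra memory but O(n^2) time, or with a set that uses O(n) extra memory but only O(n) time; which one to choose depends on whether time or memory matters more:

```python
def has_duplicates_slow(lst):
    # O(n^2) time, O(1) extra space: compare every pair of elements.
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False

def has_duplicates_fast(lst):
    # O(n) time, O(n) extra space: remember every element seen so far.
    seen = set()
    for item in lst:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates_slow([1, 2, 3, 2]))  # True
print(has_duplicates_fast([1, 2, 3, 2]))  # True
```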
- Should you always use a constant time algorithm / should you never use an exponential time algorithm? Explain.
Whether to use a constant time algorithm or an exponential time algorithm depends on the specific requirements of the problem you are trying to solve.
A constant time algorithm, which has a time complexity that does not depend on the input size, is generally considered more efficient than an exponential time algorithm. However, not all problems can be solved using constant time algorithms, and in some cases, an exponential time algorithm may be the only option.
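A rough back-of-the-envelope sketch of why the input size decides this: an exponential O(2^n) algorithm is perfectly usable for small inputs but becomes hopeless long before n gets very large:

```python
# Number of steps an O(2^n) algorithm performs at different input sizes.
for n in (10, 20, 40, 60):
    print(f"n = {n:>2}: about {2 ** n:,} steps")
# n = 10 is about a thousand steps (instant);
# n = 60 is over a quintillion steps (not feasible on any normal computer).
```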
- What are some general patterns that you noticed to determine each algorithm's time and space complexity?
A pattern I noticed is that many of the simpler sorting algorithms (bubble, selection, and insertion sort) are O(n^2): they use a loop nested inside another loop, so the work grows with the square of the input size.
I also researched the notation itself and concluded that expressions like O(n^2) are Big-O notation, which describes how an algorithm's running time (or memory use) grows as the input size n grows.
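The patterns I mean look roughly like this (a sketch with made-up function names): a single pass over the input is usually O(n), a loop nested inside another loop is usually O(n^2), and a loop that halves the remaining work each step is O(log n); extra data structures that grow with the input add to the space complexity:

```python
def sum_list(lst):
    # One pass over the input: O(n) time, O(1) extra space.
    total = 0
    for x in lst:
        total += x
    return total

def all_pairs(lst):
    # A loop inside a loop: about n * n iterations, so O(n^2) time,
    # and the result list itself takes O(n^2) space.
    pairs = []
    for a in lst:
        for b in lst:
            pairs.append((a, b))
    return pairs

def halving_steps(n):
    # The problem size is cut in half each iteration: O(log n) time.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```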
Example of a constant time algorithm:

def get_first_element(lst):
    # Accessing one element by index is a single step, no matter how long the list is: O(1).
    return lst[0]
No matter how large the input list is, this function will always take the same amount of time to execute because it only performs a single operation (accessing the first element of the list using its index). Therefore, this algorithm has a time complexity of O(1), which means it runs in constant time.
Example of an exponential time algorithm:

def count_subsets(lst):
    # Base case: the empty list has exactly one subset, the empty set.
    if len(lst) == 0:
        return 1
    else:
        rest = lst[1:]
        # Subsets that leave out the first item...
        count_without = count_subsets(rest)
        # ...and subsets that include it (adding lst[0] to each subset of rest).
        count_with = count_subsets(rest)
        # Two recursive calls per element, so the work doubles with each item: O(2^n).
        return count_without + count_with
This function takes a list as its input and counts the number of subsets of the list. It makes two recursive calls on the list without its first item: one counts the subsets that exclude that item, and the other counts the subsets that include it (every subset of the rest becomes one such subset once the first item is added). Because each call spawns two more calls on a list that is only one element shorter, the amount of work doubles with every additional element, so the algorithm runs in exponential time, O(2^n).
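A quick sanity check of the function above: a list of n items has 2^n subsets, so each extra element doubles both the answer and the amount of work:

```python
print(count_subsets([]))         # 1 (just the empty set)
print(count_subsets([1]))        # 2
print(count_subsets([1, 2]))     # 4
print(count_subsets([1, 2, 3]))  # 8  -> 2^n growth, i.e. exponential time
```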