
Top 10 Python Algorithms with Time Complexity Explained
Python is one of the most widely used programming languages today. It is popular for web development, data science, machine learning, automation, and system design. Python provides powerful libraries and tools that make implementing algorithms straightforward.
However, writing code is only half the job. How efficiently that code runs is just as important. This is where time complexity comes in: it describes how an algorithm's running time grows as the input size grows.
In this article, we’ll explore the top 10 commonly used Python algorithms, explain how they work, and analyze their time complexity with simple examples.
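To make the growth-rate idea concrete, here is a small illustrative sketch (the helper names are ours, not standard terminology) comparing how many steps a linear scan takes versus an algorithm that halves its search space each step:

```python
def linear_steps(n):
    """Worst-case steps for a linear scan: one per element, O(n)."""
    return n

def halving_steps(n):
    """Steps when the search space is halved each time: about log2(n)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (8, 1024, 1_000_000):
    print(n, linear_steps(n), halving_steps(n))
```

For a million elements, the halving strategy needs only about 20 steps while the linear scan needs a million; this gap is exactly what the O(n) vs. O(log n) notation below captures.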
1. Bubble Sort
Bubble Sort is one of the simplest sorting algorithms. It repeatedly compares adjacent elements and swaps them if they are in the wrong order. With each pass, the largest element “bubbles” to the end of the list.
Although easy to understand, Bubble Sort is not efficient for large datasets.
Time Complexity:
- Worst & Average Case: O(n²)
- Best Case (already sorted): O(n)
Example Code:
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps in a pass means the list is sorted: the O(n) best case
    return arr
2. Selection Sort
Selection Sort works by repeatedly finding the smallest element from the unsorted portion of the list and placing it at the beginning.
This algorithm performs fewer swaps than Bubble Sort but still has poor performance for large inputs.
Time Complexity:
- Best, Average, Worst Case: O(n²)
Example Code:
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
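The "fewer swaps" claim can be checked with an instrumented variant (the counting helper is ours, for illustration only): selection sort moves elements at most n − 1 times, one swap per position, regardless of input order.

```python
def selection_sort_counting(arr):
    """Selection sort that also reports how many swaps it performed."""
    n = len(arr)
    swaps = 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:  # skip the no-op swap of an element with itself
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return arr, swaps

print(selection_sort_counting([5, 4, 3, 2, 1]))  # at most len - 1 swaps
```

Bubble sort on the same reversed input would perform one swap per inversion, which is n(n − 1)/2 swaps here.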
3. Insertion Sort
Insertion Sort builds the final sorted list one element at a time. It works well for small datasets or nearly sorted lists.
It is commonly used in real-world systems as part of more complex sorting algorithms.
Time Complexity:
- Worst & Average Case: O(n²)
- Best Case: O(n)
Example Code:
def insertion_sort(arr):
    n = len(arr)
    for i in range(1, n):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
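The real-world use mentioned above is Timsort, the hybrid of merge sort and insertion sort behind Python's built-in sorted() and list.sort(). In practice, those built-ins are almost always the right choice over a hand-written sort:

```python
data = [7, 2, 9, 4, 2]
print(sorted(data))  # returns a new sorted list: [2, 2, 4, 7, 9]
data.sort()          # sorts in place; Timsort runs in O(n log n), O(n) on sorted input
print(data)
```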
4. Merge Sort
Merge Sort uses a divide-and-conquer approach. The list is split into smaller parts, each part is sorted, and then the sorted parts are merged together.
This algorithm is very efficient and commonly used in real-world applications.
Time Complexity:
- Best, Average, Worst Case: O(n log n)
Example Code:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements in order (stable sort)
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
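The merge step on its own is available in the standard library as heapq.merge, which lazily combines any number of already-sorted inputs:

```python
import heapq

left = [1, 3, 5]
right = [2, 4, 6]
merged = list(heapq.merge(left, right))
print(merged)  # [1, 2, 3, 4, 5, 6]
```

This is handy when the sorted runs already exist (for example, sorted files on disk) and only the merge phase is needed.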
5. Quick Sort
Quick Sort also follows the divide-and-conquer strategy. It selects a pivot element, partitions the list into smaller and larger elements, and recursively sorts them.
It is very fast in practice but can degrade in worst-case scenarios.
Time Complexity:
- Average Case: O(n log n)
- Worst Case: O(n²)
Example Code:
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    return quick_sort(less) + [pivot] + quick_sort(greater)
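The O(n²) worst case is triggered when the pivot is consistently the smallest or largest element, for example a first-element pivot on already-sorted input. A common mitigation, sketched here under the same list-building style as above, is to pick the pivot at random:

```python
import random

def quick_sort_random(arr):
    """Quick sort with a random pivot, making the O(n^2) case very unlikely."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort_random(less) + equal + quick_sort_random(greater)

print(quick_sort_random(list(range(1000, 0, -1)))[:5])  # [1, 2, 3, 4, 5]
```

Grouping the elements equal to the pivot into their own bucket also avoids deep recursion when the input contains many duplicates.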
6. Binary Search
Binary Search is an efficient searching algorithm that works only on sorted lists. It repeatedly divides the search space in half.
Because it halves the search space at every step, it needs only about log₂ n comparisons, making it one of the fastest searching techniques available for sorted data.
Time Complexity:
- Best Case: O(1)
- Worst & Average Case: O(log n)
Example Code:
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
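Python's standard library ships binary search in the bisect module. bisect_left returns the insertion point for the target, which equals its index when the target is present:

```python
import bisect

arr = [1, 3, 5, 7, 9]
i = bisect.bisect_left(arr, 7)
if i < len(arr) and arr[i] == 7:
    print("found at index", i)  # found at index 3
```

The index check is needed because bisect_left reports where the target *would* go even when it is absent.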
7. Depth-First Search (DFS)
DFS is a graph traversal algorithm that explores as far as possible along a path before backtracking. It uses a stack (or recursion).
DFS is useful for tasks like cycle detection and path finding.
Time Complexity:
- O(n + m), where n = number of nodes and m = number of edges
Example Code:
def dfs(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    stack.append(neighbor)
    return visited
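The cycle-detection use mentioned above needs one extra ingredient: track the nodes on the *current* DFS path, not just the visited set, since a cycle shows up as an edge back into that path. A recursive sketch for directed graphs:

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [neighbors]}."""
    visited = set()
    on_path = set()  # nodes on the current recursion path

    def visit(node):
        visited.add(node)
        on_path.add(node)
        for neighbor in graph[node]:
            if neighbor in on_path:  # back edge into the current path: cycle
                return True
            if neighbor not in visited and visit(neighbor):
                return True
        on_path.remove(node)
        return False

    return any(visit(node) for node in graph if node not in visited)

acyclic = {'a': ['b'], 'b': ['c'], 'c': []}
cyclic = {'a': ['b'], 'b': ['c'], 'c': ['a']}
print(has_cycle(acyclic), has_cycle(cyclic))  # False True
```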
8. Breadth-First Search (BFS)
BFS explores all neighboring nodes first before moving deeper. It uses a queue and is ideal for finding the shortest path in unweighted graphs.
Time Complexity:
- O(n + m)
Example Code:
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()  # O(1); list.pop(0) would be O(n) per removal
        if node not in visited:
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    queue.append(neighbor)
    return visited
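To actually recover the shortest path rather than just the reachable set, BFS can record each node's predecessor and walk that chain back from the goal. A self-contained sketch:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Shortest path in an unweighted graph as a list of nodes, or None."""
    parents = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk predecessors back to start
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parents:
                parents[neighbor] = node
                queue.append(neighbor)
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(shortest_path(graph, 'a', 'd'))  # ['a', 'b', 'd']
```

Because BFS visits nodes in order of distance from the start, the first time the goal is dequeued, the recorded predecessor chain is guaranteed to be a shortest path.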
9. Dijkstra’s Algorithm
Dijkstra’s algorithm finds the shortest path from a starting node to all other nodes in a weighted graph.
It uses a priority queue to always expand the shortest known path.
Time Complexity:
- O((n + m) log n) with a binary heap such as Python's heapq; O(n log n + m) with a Fibonacci heap
Example Code:
import heapq

def dijkstra(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    queue = [(0, start)]
    while queue:
        current_distance, node = heapq.heappop(queue)
        if current_distance > distances[node]:
            continue  # stale entry: a shorter path to this node was already found
        for neighbor in graph[node]:
            distance = current_distance + graph[node][neighbor]
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(queue, (distance, neighbor))
    return distances
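Dijkstra's correctness hinges on the priority queue always yielding the smallest tentative distance first. heapq provides exactly this min-heap behavior on a plain list, which is why the (distance, node) tuples above come back in distance order:

```python
import heapq

queue = []
for item in [(7, 'c'), (2, 'a'), (5, 'b')]:
    heapq.heappush(queue, item)

# heappop always removes the tuple with the smallest first element
order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)  # ['a', 'b', 'c']
```

Note that heapq cannot decrease a key in place; that is why the algorithm above pushes duplicate entries and skips the stale ones on pop.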
10. Topological Sort
Topological Sort orders the nodes of a Directed Acyclic Graph (DAG) so that each node appears before the nodes it points to.
It is commonly used in task scheduling and dependency resolution.
Time Complexity:
- O(n + m)
Example Code:
def topological_sort(graph):
    visited = set()
    ordering = []

    def visit(node):
        if node not in visited:
            visited.add(node)
            for neighbor in graph[node]:
                visit(neighbor)
            ordering.append(node)  # appended only after everything it points to

    for node in graph:
        visit(node)
    return ordering[::-1]  # reverse the postorder to put each node before its targets
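Since Python 3.9, the standard library offers graphlib.TopologicalSorter for exactly this task. Note that it takes the reverse of the adjacency lists above, a mapping of node to its *predecessors* (dependencies), and raises CycleError if the graph is not a DAG:

```python
from graphlib import TopologicalSorter

# node -> set of nodes that must come BEFORE it
dependencies = {'build': {'compile'}, 'compile': {'fetch'}, 'test': {'build'}}
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['fetch', 'compile', 'build', 'test']
```

The task names here are illustrative; the dependency-resolution pattern is the same one used by build systems and package managers.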