Data Structures and Algorithms Syllabus
1. Introduction
- Overview of DSA
- Importance of algorithms and data structures
- Time and Space Complexity
- Big-O, Big-Ω, Big-Θ Notations
2. Arrays and Strings
- Introduction to arrays
- 1D and 2D arrays
- String operations
- Sliding Window Technique
- Two Pointer Technique
3. Linked Lists
- Singly Linked List
- Doubly Linked List
- Circular Linked List
- Operations: Insert, Delete, Traverse
- Reverse a Linked List
4. Stacks and Queues
- Stack: Implementation, Applications (Balanced Parentheses, Infix/Postfix)
- Queue: FIFO Concept
- Circular Queue
- Deque (Double-ended queue)
- Priority Queue
5. Recursion and Backtracking
- Basic Recursion
- Recursion Tree
- Tail Recursion
- Backtracking Problems (N-Queens, Sudoku Solver, etc.)
6. Searching and Sorting Algorithms
- Linear Search
- Binary Search
- Sorting Algorithms:
  - Bubble Sort
  - Selection Sort
  - Insertion Sort
  - Merge Sort
  - Quick Sort
  - Heap Sort
  - Counting Sort
- Sorting Complexities
7. Hashing
- Hash Tables and Hash Maps
- Collision Handling (Chaining, Open Addressing)
- Applications (Anagrams, Frequency Counters)
8. Trees
- Binary Tree and Binary Search Tree (BST)
- Tree Traversals: Inorder, Preorder, Postorder
- AVL Tree (Self-Balancing BST)
- Heap and Priority Queue
- Trie (Prefix Tree)
9. Graphs
- Representations: Adjacency List, Matrix
- BFS (Breadth-First Search)
- DFS (Depth-First Search)
- Topological Sort
- Shortest Path (Dijkstra, Bellman-Ford)
- Minimum Spanning Tree (Prim’s, Kruskal’s)
10. Dynamic Programming (DP)
- Memoization vs Tabulation
- Classical DP Problems:
  - Fibonacci
  - Longest Common Subsequence
  - 0/1 Knapsack
  - Matrix Chain Multiplication
11. Greedy Algorithms
- Activity Selection
- Huffman Coding
- Kruskal’s and Prim’s Algorithms (also in Graphs)
12. Bit Manipulation
- AND, OR, XOR, NOT
- Bit Masking
- Counting Set Bits
13. Divide and Conquer
- Concept and Recursion Tree
- Examples: Merge Sort, Quick Sort, Binary Search
14. Advanced Topics (Optional)
- Segment Trees
- Fenwick Trees (Binary Indexed Trees)
- Disjoint Set Union (DSU) / Union-Find
- KMP Algorithm (Pattern Matching)
- Rabin-Karp Algorithm
📚 Resources for Study
- Books:
  - “Introduction to Algorithms” by Cormen (CLRS)
  - “Data Structures and Algorithms Made Easy” by Narasimha Karumanchi
- Online:
  - LeetCode, GeeksforGeeks, HackerRank, Coursera, Udemy
Overview of Data Structures
A Data Structure is a way of organizing, storing, and managing data so it can be used efficiently. It helps you perform operations like searching, insertion, deletion, and updating data in a structured way.
🔍 Why Data Structures Matter
- Optimize performance (speed, memory)
- Solve complex problems efficiently
- Backbone of algorithms and system design
- Essential for coding interviews and real-world applications
Types of Data Structures
1. Primitive Data Structures
- Basic data types: `int`, `float`, `char`, `boolean`
2. Non-Primitive Data Structures
🔸 Linear Data Structures (Data stored sequentially)
- Array: Fixed-size, indexed data
- Linked List: Nodes linked with pointers
- Stack: LIFO (Last In, First Out)
- Queue: FIFO (First In, First Out)
🔸 Non-Linear Data Structures (Hierarchical relationships)
- Tree: Hierarchical, parent-child (e.g., Binary Tree, BST, AVL)
- Graph: Nodes connected by edges (e.g., social networks, road maps)
🔸 Hash-based Structures
- Hash Table / Hash Map: Key-value pairs, fast lookup
- Set: Unordered collection with unique elements
⚙️ Operations in Data Structures
- Insertion
- Deletion
- Traversal
- Searching
- Sorting
- Access
🧠 Choosing the Right Data Structure
| Problem Type | Recommended Data Structure |
|---|---|
| Fast lookups | Hash Table, BST |
| LIFO/FIFO operations | Stack, Queue |
| Frequent insertions/deletions | Linked List |
| Hierarchical data | Tree |
| Relationships between entities | Graph |
💼 Real-World Applications
- Stacks: Undo features, syntax parsing
- Queues: Print jobs, task scheduling
- Graphs: Maps, network routing
- Trees: File systems, XML parsing
- Hash Maps: Caches, databases
Importance of Algorithms and Data Structures
Understanding data structures and algorithms (DSA) is crucial for writing efficient and scalable code. They are the building blocks of computer science and essential for solving real-world problems effectively.
🔑 Why Data Structures and Algorithms Matter
1. Efficiency and Performance
- Helps write fast and optimized code
- Reduces time and space complexity
- Makes software scalable for large data sets
2. Problem Solving
- Core of competitive programming and tech interviews
- Helps break down problems into manageable steps
- Enables logical and analytical thinking
3. Better Use of Resources
- Efficient memory usage (e.g., using linked lists vs. arrays)
- Reduces CPU and I/O usage
- Helps manage limited hardware effectively
4. Real-World Applications
- Trees → File systems, databases
- Graphs → Social networks, GPS navigation
- Queues → Task scheduling, operating systems
- Hash Maps → Caching, key-value storage
5. Foundational for System Design
- Good algorithms are at the heart of robust systems
- Critical for building search engines, recommendation systems, etc.
6. Crucial for Interviews and Hiring
- Major tech companies (Google, Amazon, Meta, etc.) evaluate DSA knowledge heavily during hiring
- Most coding rounds are DSA-focused
🎯 Examples
- Sorting millions of records? Use Quick Sort or Merge Sort
- Looking for shortest paths? Use Dijkstra’s Algorithm
- Managing undo/redo? Use Stack
- Avoiding duplicates? Use Set or HashMap
✅ Summary
| Benefit | Why it Matters |
|---|---|
| Fast Execution | Improves runtime performance |
| Clean Code | Promotes clarity and modular design |
| Interview Preparation | Mandatory for tech roles |
| Scalable Solutions | Handles large inputs or real-time processing |
✅ How Are Data Structures Used in the Real World?
1. In Software (Coding/Applications)
📦 Arrays, Lists
- Used in: Storing user data (e.g., contact lists, playlists, orders)
- Example: An app stores your music in an array of songs.
📚 Stacks
- Used in: Undo/Redo operations, function calls
- Example: When you undo typing in MS Word or return from a function in code.
🛤️ Queues
- Used in: Task scheduling, printer queues, customer service systems
- Example: Print jobs line up in a queue to be processed in order.
🌲 Trees
- Used in: File systems, compilers, databases
- Example: Your computer’s file explorer is a tree structure.
🌐 Graphs
- Used in: Social networks, maps, internet routing
- Example: Google Maps finds shortest routes using graph algorithms.
🔍 Hash Maps / Hash Tables
- Used in: Caching, database indexing, fast lookups
- Example: Quickly retrieving a user’s profile by ID in an app.
2. In Hardware (Low-Level Use)
🧠 Registers and Memory Blocks
- Data structures like arrays or linked lists are implemented in RAM.
- Stacks are managed at hardware level to track function calls using stack pointers.
🔄 Firmware / Embedded Systems
- Real-time scheduling uses queues
- Device buffers (e.g., keyboard, mouse input) use circular queues
🛠️ CPU and OS Level
- Process scheduling uses queues and heaps
- Memory management uses trees and linked lists
- Page replacement algorithms use stacks and hash tables
3. In Computers (Operating Systems & Systems Software)
- OS uses trees for organizing directories, queues for managing processes.
- Compilers use stacks and trees to evaluate and parse expressions.
- Networking software uses graphs for routing and DNS resolution.
📌 Summary Table
| Real-World Component | Data Structure Used | Purpose |
|---|---|---|
| Web Browser History | Stack | Back/forward navigation |
| OS Process Scheduler | Priority Queue / Heap | Managing CPU tasks |
| Routing in Networks | Graph | Finding optimal data paths |
| Databases | Trees, Hash Maps | Efficient storage and retrieval |
| RAM Memory Management | Linked List, Trees | Dynamic allocation, free-memory tracking |
⏱️ Time and Space Complexity Explained
Time and Space Complexity are measures used to describe the efficiency of an algorithm — how much time it takes and how much memory it consumes as input size grows.
🕒 1. Time Complexity
Time complexity tells how fast an algorithm runs as the input size increases.
🔹 Common Time Complexities
| Complexity | Name | Example |
|---|---|---|
| O(1) | Constant Time | Accessing array element by index |
| O(log n) | Logarithmic Time | Binary Search |
| O(n) | Linear Time | Loop through array |
| O(n log n) | Log-Linear Time | Merge Sort, Quick Sort (avg case) |
| O(n²) | Quadratic Time | Nested loops (e.g., Bubble Sort) |
| O(2ⁿ) | Exponential Time | Recursive Fibonacci without DP |
| O(n!) | Factorial Time | Solving permutations (e.g., TSP) |
🔸 Best case, Worst case, and Average case complexities matter in some algorithms.
🔍 Example:
```js
// O(n): linear time, visits each element once
function printItems(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
}
```
💾 2. Space Complexity
Space complexity measures how much memory (RAM) your algorithm needs in relation to the input size.
Components:
- Input space: Memory used by the inputs
- Auxiliary space: Extra memory used during execution
🔹 Common Space Complexities
| Complexity | Example |
|---|---|
| O(1) | Swapping two variables |
| O(n) | Storing a copy of an array |
| O(n²) | 2D matrix storage (e.g., DP table) |
Example:
```js
// O(n) space: creates a new array proportional to the input
function copyArray(arr) {
  let newArr = [];
  for (let i = 0; i < arr.length; i++) {
    newArr.push(arr[i]);
  }
  return newArr;
}
```
⚖️ Why It Matters
- Helps choose more efficient algorithms
- Essential in system design and optimization
- Saves time and resources, especially on large inputs or limited hardware
What Are Big-O, Big-Ω, and Big-Θ?
They are asymptotic notations used to analyze time or space complexity of an algorithm as input size grows towards infinity.
🔸 1. Big-O Notation (O) – Worst Case
- Describes the upper bound (maximum time an algorithm could take).
- Guarantees that the algorithm will never be slower than this.
- Most commonly used in interviews and practical code analysis.
📌 Example:
```js
function findItem(arr, target) {
  for (let item of arr) {
    if (item === target) return true;
  }
  return false;
}
```
- Worst case: Target is at the end or not present → O(n)
🔸 2. Big-Ω Notation (Ω) – Best Case
- Describes the lower bound (minimum time it takes in the best scenario).
- Shows that an algorithm will at least be this fast.
📌 Example (same function):
- Best case: Target is the first item → Ω(1)
🔸 3. Big-Θ Notation (Θ) – Tight Bound
- Describes the exact bound (the upper and lower bounds match).
- Applies when the running time grows at the same rate in every case, or when analyzing one specific case.
📌 Example:
If an algorithm takes time proportional to `n` in all cases, its running time is Θ(n).
📊 Summary Table
| Notation | Describes | Represents | Use Case |
|---|---|---|---|
| O(f(n)) | Worst-case time | Upper bound | Guarding against slowdowns |
| Ω(f(n)) | Best-case time | Lower bound | Theoretical analysis |
| Θ(f(n)) | Exact growth rate | Tight bound | Precise estimates when bounds match |
🔁 Visual Example
Let’s say an algorithm’s runtime on an input of size n is:
- Best Case: 1 step
- Average Case: about n/2 steps
- Worst Case: n steps
Then:
- Ω(1): the minimum it could do (best case)
- Θ(n): the average case, where the upper and lower bounds match
- O(n): the maximum it might take (worst case)
📚 Introduction to Arrays
An array is a linear data structure that stores a fixed-size sequence of elements of the same data type. Each element in the array is stored at a specific index, making it easy to access any item directly.
🔹 Key Features of Arrays
- Indexed: Each element is identified by an index (starting at 0)
- Fixed Size: Size is defined when the array is created
- Homogeneous: All elements are of the same type (e.g., all integers)
- Contiguous Memory: Stored in continuous memory locations
📦 Example (in JavaScript):
```js
let fruits = ["Apple", "Banana", "Mango"];
console.log(fruits[0]); // Output: Apple
```
🔧 Common Operations
| Operation | Description | Time Complexity |
|---|---|---|
| Access (index) | Retrieve element by index | O(1) |
| Insertion | Add element at end (if space) | O(1) |
| Deletion | Remove element (shifting needed) | O(n) |
| Search | Find element by value | O(n) |
| Update | Change value at specific index | O(1) |
🌍 Real-World Analogy
Think of an array like a row of mailboxes:
- Each box (index) holds a letter (value).
- You can open a specific box directly (constant-time access).
- But adding a box in between others means shifting everything.
📘 1D and 2D Arrays – Explained Simply
Arrays can be one-dimensional (1D) or two-dimensional (2D), depending on how the data is structured.
🔹 1D Array (One-Dimensional)
A 1D array is a single row of elements arranged linearly — think of it like a simple list.
📦 Example:
```js
let marks = [85, 90, 76, 88, 95];
console.log(marks[2]); // Output: 76
```
📌 Key Points:
- Accessed using one index: `array[index]`
- Stores data in a single line
- Good for storing lists, scores, or IDs
🔸 2D Array (Two-Dimensional)
A 2D array is like a grid or matrix, with rows and columns — a table of data.
📦 Example:
```js
let matrix = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9]
];
console.log(matrix[1][2]); // Output: 6
```
📌 Key Points:
- Accessed using two indices: `array[row][column]`
- Useful for:
  - Matrices
  - Board games (e.g., tic-tac-toe)
  - Image pixels
📊 Visual Comparison
| Feature | 1D Array | 2D Array |
|---|---|---|
| Structure | Linear | Table/grid-like |
| Access | `arr[i]` | `arr[i][j]` |
| Example Use | List of numbers | Chessboard, spreadsheet, matrix |
🔗 Linked Lists
A linked list is a linear data structure where elements (called nodes) are connected using pointers.
Each node contains:
- Data
- A reference (pointer) to the next node
🔹 1. Singly Linked List
- Each node points to the next node
- The last node’s `next` is `null`
📌 Structure:
```
[10] → [20] → [30] → null
```
🔧 Basic Node in JavaScript:
```js
class Node {
  constructor(data) {
    this.data = data;
    this.next = null;
  }
}
```
🔸 2. Doubly Linked List
- Each node has two pointers:
  - `next` → to the next node
  - `prev` → to the previous node
📌 Structure:
```
null ← [10] ⇄ [20] ⇄ [30] → null
```
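A doubly linked node just adds a `prev` pointer to the `Node` class above (a minimal sketch):
```js
class DoublyNode {
  constructor(data) {
    this.data = data;
    this.prev = null; // link to the previous node
    this.next = null; // link to the next node
  }
}
```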
🔁 3. Circular Linked List
- The last node links back to the first node instead of `null`
- Can be singly or doubly circular
📌 Structure:
```
[10] → [20] → [30] → (back to 10)
```
🛠️ Common Operations
✅ Insert
- Add at the beginning, middle, or end
- Requires pointer adjustment
❌ Delete
- Remove node by value or position
- Requires reconnecting neighboring nodes
🔁 Traverse
- Move through list from head to tail (or circularly)
```js
function traverse(head) {
  let temp = head;
  while (temp !== null) {
    console.log(temp.data);
    temp = temp.next;
  }
}
```
🔄 Reverse a Singly Linked List
🚀 Logic:
- Flip the direction of the `next` pointers
✅ JavaScript Example:
```js
function reverseList(head) {
  let prev = null;
  let current = head;
  while (current !== null) {
    let next = current.next; // remember the rest of the list
    current.next = prev;     // flip the pointer
    prev = current;
    current = next;
  }
  return prev; // prev is the new head
}
```
🔍 Summary Table
| Type | Forward | Backward | Circular | Memory Use |
|---|---|---|---|---|
| Singly Linked | ✅ | ❌ | Optional | Low |
| Doubly Linked | ✅ | ✅ | Optional | Medium |
| Circular Linked | ✅ | ❌ / ✅ | ✅ | Medium |
📚 Stacks and Queues in Data Structures
Stacks and Queues are linear data structures used to store and manage data in a specific order.
🔺 Stack
A stack is a Last-In-First-Out (LIFO) data structure.
✅ Key Operations:
- `push(element)` → Add to the top
- `pop()` → Remove from the top
- `peek()` → View the top element
- `isEmpty()` → Check if the stack is empty
📦 Example (JavaScript):
```js
let stack = [];
stack.push(10); // [10]
stack.push(20); // [10, 20]
stack.pop();    // 20 removed, [10]
```
📌 Applications of Stack:
1. Balanced Parentheses
```js
function isBalanced(expr) {
  let stack = [];
  for (let ch of expr) {
    if (ch === '(') stack.push(ch);
    else if (ch === ')') {
      if (stack.length === 0) return false;
      stack.pop();
    }
  }
  return stack.length === 0;
}
```
2. Infix to Postfix Conversion
- Infix: `A + B`
- Postfix: `AB+`
Using a stack for operators ensures correct precedence handling, as the sketch below shows.
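A sketch of that stack-based conversion, assuming single-character operands and only `+ - * /` with parentheses (a simplified shunting-yard, not a full expression parser):
```js
function infixToPostfix(expr) {
  const prec = { '+': 1, '-': 1, '*': 2, '/': 2 };
  const stack = []; // holds operators and '('
  let output = '';
  for (const ch of expr) {
    if (/[A-Za-z0-9]/.test(ch)) {
      output += ch;              // operands go straight to the output
    } else if (ch === '(') {
      stack.push(ch);
    } else if (ch === ')') {
      while (stack.length && stack[stack.length - 1] !== '(') {
        output += stack.pop();   // pop until the matching '('
      }
      stack.pop();               // discard the '('
    } else if (ch in prec) {
      // pop operators of higher or equal precedence first
      while (
        stack.length &&
        stack[stack.length - 1] !== '(' &&
        prec[stack[stack.length - 1]] >= prec[ch]
      ) {
        output += stack.pop();
      }
      stack.push(ch);
    }
  }
  while (stack.length) output += stack.pop();
  return output;
}

console.log(infixToPostfix('A+B'));   // "AB+"
console.log(infixToPostfix('A+B*C')); // "ABC*+"
```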
3. Undo Mechanism in editors (last change undone first)
🔻 Queue
A queue is a First-In-First-Out (FIFO) data structure.
✅ Key Operations:
- `enqueue(element)` → Add to the rear
- `dequeue()` → Remove from the front
- `peek()` → View the front element
- `isEmpty()` → Check if the queue is empty
📦 Example (JavaScript):
```js
let queue = [];
queue.push(1);  // [1]
queue.push(2);  // [1, 2]
queue.shift();  // 1 removed, [2]
```
📌 Applications of Queue:
- Print queue
- Task scheduling (OS)
- BFS traversal in graphs
- Call center / Service lines
🔄 Circular Queue
A circular queue wraps around to use available space efficiently.
✅ Key Concept:
- When `rear` reaches the end of the array, it wraps around to index 0 if free space exists
📦 Example (Visual):
```
[ _, _, 3, 4, 5 ]
     ↑        ↑
   front     rear
After enqueue(1):
[ 1, _, 3, 4, 5 ]   (rear wrapped around to index 0)
```
Useful in memory management, buffering, multithreading tasks.
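A minimal fixed-capacity circular queue sketch (the class name and API here are illustrative, not from any library):
```js
class CircularQueue {
  constructor(capacity) {
    this.items = new Array(capacity);
    this.capacity = capacity;
    this.front = 0; // index of the first element
    this.size = 0;  // number of stored elements
  }
  enqueue(value) {
    if (this.size === this.capacity) return false;         // full
    const rear = (this.front + this.size) % this.capacity; // wrap around
    this.items[rear] = value;
    this.size++;
    return true;
  }
  dequeue() {
    if (this.size === 0) return undefined;                 // empty
    const value = this.items[this.front];
    this.front = (this.front + 1) % this.capacity;
    this.size--;
    return value;
  }
}

const cq = new CircularQueue(5);
[1, 2, 3, 4, 5].forEach(v => cq.enqueue(v));
cq.dequeue();  // frees a slot at the front
cq.enqueue(6); // rear wraps around to index 0
```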
🔁 Deque (Double-Ended Queue)
You can insert and remove elements from both ends (front and rear).
✅ Operations:
- `insertFront()`, `insertRear()`
- `deleteFront()`, `deleteRear()`
Used in:
- Sliding window problems
- Palindrome checking
- Implementing stacks/queues more flexibly
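In JavaScript, a plain array already behaves like a deque (a quick sketch; note that `shift`/`unshift` are O(n) on arrays, so a linked-list or ring-buffer deque is better for large inputs):
```js
const deque = [];
deque.push(1);    // insertRear  → [1]
deque.unshift(0); // insertFront → [0, 1]
deque.push(2);    // insertRear  → [0, 1, 2]
deque.pop();      // deleteRear  → returns 2, leaves [0, 1]
deque.shift();    // deleteFront → returns 0, leaves [1]
```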
🥇 Priority Queue
A queue where each element has a priority, and the element with the highest priority is served first.
📦 Example (JavaScript using array):
```js
let pq = [];
pq.push({ task: 'A', priority: 2 });
pq.push({ task: 'B', priority: 1 });
pq.sort((a, b) => a.priority - b.priority); // lower number = higher priority
console.log(pq.shift()); // { task: 'B', priority: 1 }
```
Used in:
- Job scheduling
- Dijkstra’s algorithm
- AI pathfinding
🧠 Summary Table
| Structure | Principle | Add | Remove | Applications |
|---|---|---|---|---|
| Stack | LIFO | Top | Top | Expression parsing, Undo, Call stack |
| Queue | FIFO | Rear | Front | OS scheduling, BFS, print queues |
| Circular Queue | FIFO + wrap | Rear | Front | Buffering, network data, OS queues |
| Deque | Both ends | Front/Rear | Front/Rear | Palindromes, sliding windows |
| Priority Queue | Priority | Any | By priority | Pathfinding, job queues, heaps |
🔍 Searching Algorithms
✅ 1. Linear Search
Search each element one by one until you find the target.
Code (JS):
```js
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}
console.log(linearSearch([5, 3, 7, 2], 7)); // Output: 2
```
- Time Complexity: O(n)
- Best Use Case: Unsorted or small datasets
✅ 2. Binary Search
Works on sorted arrays. Repeatedly divide the search range in half.
Code (JS):
```js
function binarySearch(arr, target) {
  let left = 0, right = arr.length - 1;
  while (left <= right) {
    let mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
console.log(binarySearch([2, 3, 5, 7], 5)); // Output: 2
```
- Time Complexity: O(log n)
- Best Use Case: Large, sorted datasets
🔃 Sorting Algorithms
1️⃣ Bubble Sort
Repetitively swap adjacent elements if they are in the wrong order.
Code:
```js
function bubbleSort(arr) {
  let n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}
```
- Time Complexity: O(n²)
- Best Case (already sorted): O(n), but only if you add an early exit when a full pass makes no swaps; the version above always does O(n²) comparisons
- Simple but inefficient
2️⃣ Selection Sort
Select the minimum element and place it at the beginning.
Code:
```js
function selectionSort(arr) {
  let n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    let minIdx = i;
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIdx]) minIdx = j;
    }
    [arr[i], arr[minIdx]] = [arr[minIdx], arr[i]];
  }
  return arr;
}
```
- Time Complexity: O(n²)
- Useful when swaps are costly
3️⃣ Insertion Sort
Insert each element into its correct position in the sorted part.
Code:
```js
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    let key = arr[i], j = i - 1;
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = key;
  }
  return arr;
}
```
- Time Complexity: O(n²)
- Best Case (nearly sorted): O(n)
4️⃣ Merge Sort
Divide the array into halves, sort recursively, and merge.
Code:
```js
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  let mid = Math.floor(arr.length / 2);
  let left = mergeSort(arr.slice(0, mid));
  let right = mergeSort(arr.slice(mid));
  return merge(left, right);
}
function merge(left, right) {
  let result = [], i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps equal elements in their original order (stability)
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```
- Time Complexity: O(n log n)
- Stable Sort (keeps equal items in original order)
5️⃣ Quick Sort
Pick a pivot, partition the array, and recursively sort partitions.
Code:
```js
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  let pivot = arr[arr.length - 1];
  let left = [], right = [];
  for (let i = 0; i < arr.length - 1; i++) {
    arr[i] < pivot ? left.push(arr[i]) : right.push(arr[i]);
  }
  return [...quickSort(left), pivot, ...quickSort(right)];
}
```
- Time Complexity: O(n log n) average, O(n²) worst
- In-place version is more memory-efficient
6️⃣ Heap Sort
Uses a binary heap to sort elements.
Steps:
- Build a max heap
- Swap root with last element
- Heapify the reduced heap
Time Complexity: O(n log n)
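A sketch of those steps in JavaScript (build an in-place max heap, then repeatedly swap the root to the end):
```js
function heapSort(arr) {
  const n = arr.length;

  // sift arr[i] down within a heap of the given size
  function heapify(size, i) {
    let largest = i;
    const left = 2 * i + 1, right = 2 * i + 2;
    if (left < size && arr[left] > arr[largest]) largest = left;
    if (right < size && arr[right] > arr[largest]) largest = right;
    if (largest !== i) {
      [arr[i], arr[largest]] = [arr[largest], arr[i]];
      heapify(size, largest);
    }
  }

  // 1. build a max heap
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) heapify(n, i);
  // 2. swap root with last element, 3. heapify the reduced heap
  for (let end = n - 1; end > 0; end--) {
    [arr[0], arr[end]] = [arr[end], arr[0]];
    heapify(end, 0);
  }
  return arr;
}

console.log(heapSort([4, 10, 3, 5, 1])); // [1, 3, 4, 5, 10]
```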
7️⃣ Counting Sort
Count the occurrences of elements and reconstruct the sorted array.
Code:
```js
function countingSort(arr, max) {
  let count = new Array(max + 1).fill(0);
  for (let num of arr) count[num]++;
  let result = [];
  for (let i = 0; i <= max; i++) {
    while (count[i]-- > 0) result.push(i);
  }
  return result;
}
console.log(countingSort([4, 2, 2, 8, 3], 8)); // [2, 2, 3, 4, 8]
```
- Time Complexity: O(n + k), where k = max element
- Best for small integers / fixed range
📊 Sorting Algorithms Comparison
| Algorithm | Time Complexity (Avg) | Best | Worst | Stable | Use Case |
|---|---|---|---|---|---|
| Bubble Sort | O(n²) | O(n) | O(n²) | ✅ | Simple, educational |
| Selection Sort | O(n²) | O(n²) | O(n²) | ❌ | When memory writes are costly |
| Insertion Sort | O(n²) | O(n) | O(n²) | ✅ | Nearly sorted small arrays |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | ✅ | Large datasets, stable sort |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | ❌ | Fastest on average (in-place) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | ❌ | Memory-constrained environments |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | ✅ | Small ranges, integers |
🔑 What is Hashing?
Hashing is a technique to map data (like a string or number) to a fixed-size value using a hash function. This value (called a hash) is used to store and retrieve data quickly in constant time (O(1)) on average.
Think of it as a smart address finder — give it a name (like “apple”), and it gives you an address (like slot 3 in memory) where that data is stored.
📦 What is a Hash Table (or Hash Map)?
A hash table is a data structure that:
- Uses a hash function to compute an index into an array.
- At that index, it stores key-value pairs.
Example:
```js
let hashMap = {
  "apple": 10,
  "banana": 5,
  "grape": 20
};
console.log(hashMap["banana"]); // Output: 5
```
⚠️ Problem: What if 2 keys hash to the same index?
This is called a collision.
🔧 Collision Handling Techniques
1. Chaining (Separate Chaining)
Each index in the hash table stores a linked list (or array) of key-value pairs.
Example:
```
Index 3: [ ["apple", 10], ["grape", 20] ]
```
So if both “apple” and “grape” hash to index 3, we just store them in a list at that index.
JS-style Implementation:
```js
class HashTable {
  constructor(size) {
    this.table = new Array(size);
  }
  _hash(key) {
    let hash = 0;
    for (let char of key) hash += char.charCodeAt(0);
    return hash % this.table.length;
  }
  set(key, value) {
    const index = this._hash(key);
    if (!this.table[index]) this.table[index] = [];
    this.table[index].push([key, value]);
  }
  get(key) {
    const index = this._hash(key);
    const bucket = this.table[index];
    if (!bucket) return undefined; // nothing ever hashed to this index
    for (let pair of bucket) {
      if (pair[0] === key) return pair[1];
    }
    return undefined;
  }
}
```
2. Open Addressing
Instead of using lists, it searches for the next available slot when a collision happens.
Techniques under open addressing:
- Linear Probing: Check next cell one by one.
- Quadratic Probing: Check i² steps away.
- Double Hashing: Use a second hash function to compute next cell.
Linear Probing Example:
```
Index 5: full → try 6 → try 7 → place it
```
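A bare-bones linear-probing table sketch (illustrative only: no resizing and no deletion, and it assumes the table never fills up):
```js
class OpenAddressingTable {
  constructor(size) {
    this.slots = new Array(size).fill(null); // each slot: [key, value] or null
  }
  _hash(key) {
    let hash = 0;
    for (const char of key) hash += char.charCodeAt(0);
    return hash % this.slots.length;
  }
  set(key, value) {
    let i = this._hash(key);
    // probe forward until an empty slot or the same key is found
    while (this.slots[i] !== null && this.slots[i][0] !== key) {
      i = (i + 1) % this.slots.length; // wrap around at the end
    }
    this.slots[i] = [key, value];
  }
  get(key) {
    let i = this._hash(key);
    while (this.slots[i] !== null) {
      if (this.slots[i][0] === key) return this.slots[i][1];
      i = (i + 1) % this.slots.length;
    }
    return undefined; // hit an empty slot: the key was never stored
  }
}
```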
🎯 Real-World Applications of Hash Tables
📘 1. Frequency Counter
Count the number of times each word appears.
```js
function frequencyCounter(str) {
  let freq = {};
  for (let char of str) {
    freq[char] = (freq[char] || 0) + 1;
  }
  return freq;
}
console.log(frequencyCounter("banana"));
// Output: { b: 1, a: 3, n: 2 }
```
🔠 2. Anagram Checker
Check if two strings are anagrams (same letters, different order).
```js
function isAnagram(str1, str2) {
  if (str1.length !== str2.length) return false;
  let count = {};
  for (let char of str1) count[char] = (count[char] || 0) + 1;
  for (let char of str2) {
    if (!count[char]) return false;
    count[char]--;
  }
  return true;
}
console.log(isAnagram("listen", "silent")); // true
```
🧠 3. Caching and Lookups
- Used in web browsers, CPU caches, DNS resolution
- Store results to avoid recalculating
🧮 4. Data Deduplication
Check if a value has been seen before.
```js
const seen = {};
const unique = [];
for (const item of data) {  // data: any array of values
  if (seen[item]) continue; // duplicate, skip it
  seen[item] = true;
  unique.push(item);
}
```
⚖️ Time and Space Complexity
| Operation | Average Case | Worst Case (poor hash) |
|---|---|---|
| Insert | O(1) | O(n) |
| Search | O(1) | O(n) |
| Delete | O(1) | O(n) |
🧪 Visual Example
Imagine a hash function turns:
- `"apple"` → 2
- `"banana"` → 5
- `"grape"` → 2 (collision with apple)
With chaining, index 2 becomes:
```
Index 2 → [ ["apple", 10], ["grape", 20] ]
```
✅ Summary
| Concept | Use |
|---|---|
| Hash Tables | Key-value storage, fast access |
| Hash Maps in JS | Objects or `Map()` |
| Collision Handling | Chaining or open addressing |
| Use Cases | Caching, frequency counters, anagrams |
🌳 1. Binary Tree & Binary Search Tree (BST)
🟩 Binary Tree
A binary tree is a hierarchical data structure in which each node has at most two children, commonly referred to as the left and right child.
```
    A
   / \
  B   C
```
🟨 Binary Search Tree (BST)
A BST is a special kind of binary tree where:
- All left descendants have values less than the node.
- All right descendants have values greater than the node.
```
      8
     / \
    3   10
   / \    \
  1   6    14
```
✅ Use Cases:
- Searching and sorting
- Databases (indexes)
- Range queries
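A minimal BST sketch with insert and search (the node and function names are illustrative), using the same values as the diagram above:
```js
class TreeNode {
  constructor(val) {
    this.val = val;
    this.left = null;
    this.right = null;
  }
}

function insert(root, val) {
  if (!root) return new TreeNode(val);
  if (val < root.val) root.left = insert(root.left, val);
  else root.right = insert(root.right, val);
  return root;
}

function search(root, val) {
  if (!root) return false;
  if (root.val === val) return true;
  // the BST property tells us which side to look on
  return val < root.val ? search(root.left, val) : search(root.right, val);
}

let root = null;
for (const v of [8, 3, 10, 1, 6, 14]) root = insert(root, v);
console.log(search(root, 6)); // true
console.log(search(root, 7)); // false
```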
🔄 2. Tree Traversals
Traversal = Visiting all nodes in a specific order.
📘 Inorder (Left → Node → Right)
Used in BSTs to get sorted values.
Example:
```
      4
     / \
    2   5
   / \
  1   3
```
Inorder: 1, 2, 3, 4, 5
📗 Preorder (Node → Left → Right)
Used to clone a tree or prefix expression.
📕 Postorder (Left → Right → Node)
Used for deleting the tree or postfix expression.
```js
function inorder(root) {
  if (!root) return;
  inorder(root.left);
  console.log(root.val);
  inorder(root.right);
}
```
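Preorder and postorder are the same walk with the `console.log` moved:
```js
function preorder(root) {
  if (!root) return;
  console.log(root.val); // visit the node first
  preorder(root.left);
  preorder(root.right);
}

function postorder(root) {
  if (!root) return;
  postorder(root.left);
  postorder(root.right);
  console.log(root.val); // visit the node last
}
```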
⚖️ 3. AVL Tree (Self-Balancing BST)
An AVL Tree is a BST that automatically balances itself after insertions/deletions to maintain optimal performance.
✅ Rule:
- Every node’s balance factor satisfies `|height(left) - height(right)| ≤ 1`
🔄 Rotations are used to fix imbalances:
- Left Rotation
- Right Rotation
- Left-Right / Right-Left Rotation
⚙️ Benefit:
- Guarantees O(log n) time for insert, delete, search.
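As a taste of how rotations work, here is a right rotation (used when a node's left subtree is too tall); a full AVL implementation also tracks node heights and chooses the rotation from the balance factor:
```js
// Right-rotate the subtree rooted at y:
//
//       y                x
//      / \              / \
//     x   C    ==>     A   y
//    / \                  / \
//   A   B                B   C
function rotateRight(y) {
  const x = y.left;
  y.left = x.right; // subtree B moves over to y's left
  x.right = y;      // y becomes x's right child
  return x;         // x is the new root of this subtree
}
```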
⛰ 4. Heap & Priority Queue
🔹 Heap
A binary heap is a complete binary tree:
- Max-Heap: Parent ≥ children
- Min-Heap: Parent ≤ children
```
Max-Heap:
     10
    /  \
   5    3

Min-Heap:
     1
    / \
   4   2
```
💼 Priority Queue
It’s a queue where each element has a priority. The highest (or lowest) priority element is served first.
✅ Use Cases:
- CPU Scheduling
- Dijkstra’s Algorithm
- Event Management systems
In JS (a minimal min-heap class):
```js
class MinHeap {
  constructor() {
    this.heap = [];
  }
  insert(val) {
    this.heap.push(val);
    this.bubbleUp();
  }
  bubbleUp() {
    let idx = this.heap.length - 1;
    while (idx > 0) {
      let parent = Math.floor((idx - 1) / 2);
      if (this.heap[parent] <= this.heap[idx]) break;
      [this.heap[parent], this.heap[idx]] = [this.heap[idx], this.heap[parent]];
      idx = parent;
    }
  }
}
```
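To actually serve the minimum, the heap also needs an extract operation. A sketch of two methods that could be added inside the `MinHeap` class above:
```js
// Remove and return the smallest element (the root).
extractMin() {
  if (this.heap.length === 0) return undefined;
  const min = this.heap[0];
  const last = this.heap.pop();
  if (this.heap.length > 0) {
    this.heap[0] = last; // move the last element to the root...
    this.sinkDown(0);    // ...then restore the heap property
  }
  return min;
}

sinkDown(idx) {
  const n = this.heap.length;
  while (true) {
    let smallest = idx;
    const left = 2 * idx + 1, right = 2 * idx + 2;
    if (left < n && this.heap[left] < this.heap[smallest]) smallest = left;
    if (right < n && this.heap[right] < this.heap[smallest]) smallest = right;
    if (smallest === idx) break;
    [this.heap[idx], this.heap[smallest]] = [this.heap[smallest], this.heap[idx]];
    idx = smallest;
  }
}
```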
📚 5. Trie (Prefix Tree)
A Trie is a special tree used to store strings efficiently — especially for prefix-based queries.
🔠 Structure:
Each node represents a character. Paths from root to leaf represent words.
Example: Insert “apple” and “app” (the shorter word shares the first three nodes)
```
(root)
  |
  a
  |
  p
  |
  p   ← end of word: "app"
  |
  l
  |
  e   ← end of word: "apple"
```
✅ Use Cases:
- Auto-complete
- Spell-check
- IP routing
- Word games
JS Example (basic insert & search):
```js
class TrieNode {
  constructor() {
    this.children = {};
    this.endOfWord = false;
  }
}
class Trie {
  constructor() {
    this.root = new TrieNode();
  }
  insert(word) {
    let node = this.root;
    for (let char of word) {
      if (!node.children[char]) node.children[char] = new TrieNode();
      node = node.children[char];
    }
    node.endOfWord = true;
  }
  search(word) {
    let node = this.root;
    for (let char of word) {
      if (!node.children[char]) return false;
      node = node.children[char];
    }
    return node.endOfWord;
  }
}
```
📊 Summary Comparison Table:
| Structure | Key Feature | Use Case |
|---|---|---|
| Binary Tree | Max 2 children | Base for many structures |
| BST | Sorted binary tree | Fast lookup, insert, delete |
| AVL Tree | Self-balancing BST | Guaranteed O(log n) operations |
| Heap | Parent ≥ or ≤ children | Priority queues, heapsort |
| Priority Queue | Serve high-priority first | Scheduling, algorithms |
| Trie | Prefix-based string structure | Autocomplete, search |
What is a Graph?
A graph is a collection of nodes (vertices) connected by edges. It can be:
- Directed or Undirected
- Weighted or Unweighted
- Cyclic or Acyclic
Real-World Examples:
- Google Maps (cities = nodes, roads = edges)
- Social Networks (users = nodes, friendships = edges)
- Web Crawling
- Flight Routes
📋 Representations of Graphs
1️⃣ Adjacency List
Stores a list of all neighbors for each node.
```js
{
  A: ['B', 'C'],
  B: ['D'],
  C: ['E'],
  D: [],
  E: []
}
```
✅ Efficient for sparse graphs (fewer edges).
2️⃣ Adjacency Matrix
2D array; matrix[i][j] = 1 if there’s an edge from i to j.
```
    A  B  C  D  E
A   0  1  1  0  0
B   0  0  0  1  0
...
```
✅ Faster edge lookup but uses more space.
🔁 Traversal Algorithms
🔹 Breadth-First Search (BFS)
- Explores level-by-level (uses a queue).
- Good for shortest paths in unweighted graphs.
```js
function bfs(graph, start) {
  const visited = new Set();
  const queue = [start];
  while (queue.length) {
    const node = queue.shift();
    if (!visited.has(node)) {
      console.log(node);
      visited.add(node);
      queue.push(...graph[node]);
    }
  }
}
```
🔸 Depth-First Search (DFS)
- Explores as far as possible down each branch (uses a stack or recursion).
```js
function dfs(graph, start, visited = new Set()) {
  if (!visited.has(start)) {
    console.log(start);
    visited.add(start);
    for (let neighbor of graph[start]) {
      dfs(graph, neighbor, visited);
    }
  }
}
```
🧮 Topological Sort (For Directed Acyclic Graphs)
Used to order tasks where one must happen before another (e.g., build systems, course prerequisites).
Rule: For an edge (u → v), `u` must appear before `v`.
Kahn’s Algorithm (BFS based):
- Compute in-degrees.
- Add nodes with in-degree 0 to queue.
- Remove nodes and reduce neighbors’ in-degrees.
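A sketch of Kahn's algorithm over the adjacency-list format shown earlier (returns `null` if the graph contains a cycle):
```js
function topologicalSort(graph) {
  // count incoming edges for every node
  const inDegree = {};
  for (const node in graph) inDegree[node] = inDegree[node] || 0;
  for (const node in graph) {
    for (const nb of graph[node]) inDegree[nb] = (inDegree[nb] || 0) + 1;
  }
  // start with all nodes that have no prerequisites
  const queue = Object.keys(inDegree).filter(n => inDegree[n] === 0);
  const order = [];
  while (queue.length) {
    const node = queue.shift();
    order.push(node);
    for (const nb of graph[node]) {
      if (--inDegree[nb] === 0) queue.push(nb);
    }
  }
  // if some nodes never reached in-degree 0, there was a cycle
  return order.length === Object.keys(graph).length ? order : null;
}

console.log(topologicalSort({ A: ['B', 'C'], B: ['D'], C: ['D'], D: [] }));
// e.g. [ 'A', 'B', 'C', 'D' ]
```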
🗺️ Shortest Path Algorithms
📍 1. Dijkstra’s Algorithm
- Weighted Graphs with Non-negative weights
- Uses a min-priority queue
Example: Shortest driving route (where each road has a time/cost)
A working implementation revolves around a min-priority queue and a map of best-known distances.
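A minimal sketch, assuming a weighted adjacency list like `{ A: [['B', 4]] }`. For brevity it scans for the closest unvisited node instead of using a real min-priority queue, which makes it O(V²) rather than O((V + E) log V):
```js
function dijkstra(graph, source) {
  const dist = {};
  const visited = new Set();
  for (const node in graph) dist[node] = Infinity;
  dist[source] = 0;

  for (let i = 0; i < Object.keys(graph).length; i++) {
    // pick the closest node we haven't finalized yet
    let node = null;
    for (const n in dist) {
      if (!visited.has(n) && (node === null || dist[n] < dist[node])) node = n;
    }
    if (node === null || dist[node] === Infinity) break; // rest unreachable
    visited.add(node);
    // relax its outgoing edges
    for (const [nb, weight] of graph[node]) {
      if (dist[node] + weight < dist[nb]) dist[nb] = dist[node] + weight;
    }
  }
  return dist;
}

console.log(dijkstra({
  A: [['B', 4], ['C', 1]],
  B: [['D', 1]],
  C: [['B', 2], ['D', 5]],
  D: []
}, 'A')); // { A: 0, B: 3, C: 1, D: 4 }
```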
📍 2. Bellman-Ford Algorithm
- Handles negative weights
- Slower than Dijkstra (O(VE))
✅ Can detect negative cycles
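A sketch over an edge list: V-1 relaxation passes, plus one extra pass to detect a reachable negative cycle:
```js
// edges: array of [from, to, weight]
function bellmanFord(nodes, edges, source) {
  const dist = {};
  for (const n of nodes) dist[n] = Infinity;
  dist[source] = 0;

  // relax every edge V-1 times
  for (let i = 0; i < nodes.length - 1; i++) {
    for (const [u, v, w] of edges) {
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
    }
  }
  // one more pass: any further improvement means a negative cycle
  for (const [u, v, w] of edges) {
    if (dist[u] + w < dist[v]) return null;
  }
  return dist;
}

const edges = [['A', 'B', 4], ['A', 'C', 1], ['C', 'B', -2], ['B', 'D', 1]];
console.log(bellmanFord(['A', 'B', 'C', 'D'], edges, 'A'));
// { A: 0, B: -1, C: 1, D: 0 }
```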
🌳 Minimum Spanning Tree (MST)
Used to connect all nodes with minimum total edge weight.
🌐 1. Prim’s Algorithm
- Starts with one node and adds the smallest edge connecting to the tree.
- Uses a priority queue.
🌐 2. Kruskal’s Algorithm
- Sorts all edges by weight.
- Adds edges without forming cycles using Disjoint Set Union (DSU).
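A sketch of Kruskal's over an edge list, with a minimal union-find (path compression only; real implementations usually add union by rank):
```js
// edges: array of [u, v, weight]; returns the edges chosen for the MST
function kruskal(nodes, edges) {
  const parent = {};
  for (const n of nodes) parent[n] = n;
  function find(x) {
    if (parent[x] !== x) parent[x] = find(parent[x]); // path compression
    return parent[x];
  }

  const mst = [];
  const sorted = [...edges].sort((a, b) => a[2] - b[2]); // cheapest first
  for (const [u, v, w] of sorted) {
    const ru = find(u), rv = find(v);
    if (ru !== rv) {   // different components, so adding u-v makes no cycle
      parent[ru] = rv; // union the two components
      mst.push([u, v, w]);
    }
  }
  return mst;
}

const graphEdges = [['A', 'B', 4], ['A', 'C', 1], ['B', 'C', 2], ['B', 'D', 5]];
console.log(kruskal(['A', 'B', 'C', 'D'], graphEdges));
// [ [ 'A', 'C', 1 ], [ 'B', 'C', 2 ], [ 'B', 'D', 5 ] ]
```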
🧠 Applications of MST:
- Network design (fiber/cable networks)
- Cluster analysis
- Civil planning
📝 Summary Table
| Algorithm/Concept | Key Idea | Use Case |
|---|---|---|
| Adjacency List | Map node → neighbors | Memory efficient |
| Adjacency Matrix | 2D edge presence | Fast lookup |
| BFS | Level-wise traversal | Shortest path in unweighted graphs |
| DFS | Deepest path first | Cycle detection, pathfinding |
| Topological Sort | Linear order in DAGs | Task scheduling |
| Dijkstra | Greedy shortest path | GPS routing |
| Bellman-Ford | Handles negative weights | Currency arbitrage, graphs with negative edges |
| Prim’s | Greedy MST from one node | Building minimum-cost networks |
| Kruskal’s | Greedy MST by edge sorting | Clustering data |