📋 ASD Exam Cheat Sheet

Quick reference for algorithms, complexities & data structures

📅 Lecture Concept Index

L1-L2: Foundations

  • Correctness, Specification
  • Loop Invariants, Variants
  • Asymptotic Notation (O, Ω, Θ)

L3-L5: Search & Sort

  • Binary Search, QuickSelect
  • O(n²) Sorts (Selection, Insertion)
  • O(n log n) Sorts (Merge, Quick, Heap)
  • Linear Sorts (Count, Radix)

L7-L8: Structures & ADTs

  • Arrays, Linked Lists
  • Dynamic Arrays (Unbounded)
  • Stacks, Queues, Deques
  • Priority Queues, Binary Heaps

L9-L10: Dictionaries & Trees

  • Hash Tables (O(1) avg)
  • Simple Dictionary (Arrays/Lists)
  • BST, AVL Trees, RB Trees
  • Dynamic Ordered Set (Optional)

📚 Correctness of Algorithms

L1

Specification S10

  • Initial condition: Valid input constraints
  • Final condition: Expected output/result

Total Correctness S12

Algorithm is totally correct if:

  • It stops for all valid inputs (Termination)
  • Returns correct output (Partial Correctness)

Formula: $Total = Partial + Termination$

Termination Proof S16

Proven using a Variant Function $v: State \to \mathbb{N}$:

  • Decreasing: $v(S_{i+1}) < v(S_i)$
  • Bounded: $v(S_i) \ge 0$

Partial Correctness

IF algorithm stops → output is correct

(Does NOT guarantee termination)

Loop Invariant S17

Predicate true before & after each iteration.

  • Initialization: True before loop
  • Maintenance: Preserved by iteration
  • Termination: Implies result

⏱️ Complexity Analysis

L2

Dominating Operations S27

Operations whose count is proportional to total work (e.g., comparisons in sorting).

Time Complexity S30

  • W(n) - Worst case (pessimistic)
  • A(n) - Average case
  • B(n) - Best case

Space Complexity

Memory used relative to input size n.

  • In-place: O(1) extra space
  • Not in-place: O(n) or more

Amortized Analysis Methods S135

| Method | Approach | Formal Condition |
|---|---|---|
| Aggregate | Total cost for a sequence of $m$ operations | $A = \text{TotalCost}(m) / m$ |
| Accounting | Assign fixed costs (credits); pre-pay cheap ops | $\sum \hat{c}_i \ge \sum c_i$ |
| Potential | $\Phi$: state → real number; $\Phi(S_0)=0$ | $a_i = c_i + \Phi(S_i) - \Phi(S_{i-1})$ |

📈 Asymptotic Notation S37

L2 ⭐ COMPULSORY

| Notation | Meaning | Limit Test (f/g → ?) |
|---|---|---|
| f(n) = O(g(n)) | f grows at most as fast as g (upper bound) | ≤ constant (including 0) |
| f(n) = Ω(g(n)) | f grows at least as fast as g (lower bound) | ≥ constant (including ∞) |
| f(n) = Θ(g(n)) | f grows at same rate as g (tight bound) | = positive constant |
| f(n) = o(g(n)) | f grows strictly slower than g | = 0 |
| f(n) = ω(g(n)) | f grows strictly faster than g | = ∞ |

Complexity Ranking (Growth Hierarchy) S42

O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)

Rule: $f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ AND } f(n) = \Omega(g(n))$

📊 Comparing Function Ranks (Limit Test Method) S41

Limit Test Formula

To compare $f(n)$ with $g(n)$, compute $\lim_{n \to \infty} f(n)/g(n)$.

| Limit Result | Meaning | Notation |
|---|---|---|
| = 0 | f grows strictly slower than g | f = o(g) and f = O(g) |
| = c > 0 (constant) | f and g grow at same rate | f = Θ(g) |
| = ∞ | f grows strictly faster than g | f = ω(g) and f = Ω(g) |
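
A quick way to sanity-check a limit test is with a computer algebra system; a minimal Python sketch using sympy (assuming sympy is installed):

import sympy

n = sympy.symbols('n', positive=True)

# n log n vs n^2: limit 0, so n log n = o(n^2)
print(sympy.limit((n * sympy.log(n)) / n**2, n, sympy.oo))  # 0
# 3n^2 + 5n vs n^2: limit 3 > 0, so Theta(n^2)
print(sympy.limit((3*n**2 + 5*n) / n**2, n, sympy.oo))      # 3
# log n vs n: limit 0, so log n = o(n)
print(sympy.limit(sympy.log(n) / n, n, sympy.oo))           # 0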

Example Comparisons

| f(n) | g(n) | Limit f/g | Result |
|---|---|---|---|
| n | n² | n/n² = 1/n → 0 | n = o(n²) ✓ slower |
| log n | n | (log n)/n → 0 | log n = o(n) ✓ slower |
| n log n | n² | (n log n)/n² → 0 | n log n = o(n²) |
| 2ⁿ | n! | 2ⁿ/n! → 0 | 2ⁿ = o(n!) ✓ slower |
| 3n² + 5n | n² | → 3 | Θ(n²) same rate |

⚡ Quick Hierarchy Rules

• Polynomial beats logarithm: $n^\varepsilon = \omega(\log n)$ for any $\varepsilon > 0$
• Exponential beats polynomial: $a^n = \omega(n^k)$ for any $a > 1$
• Factorial beats exponential: $n! = \omega(a^n)$
• Constants don't matter: 5n² = Θ(n²)

📐 Logarithmic Properties

CLRS 🧮 MATH FOUNDATION

Standard Notation (CLRS)

  • $\lg n$ $= \log_2 n$ (Binary log)
  • $\ln n$ $= \log_e n$ (Natural log)
  • $\lg^2 n$ $= (\lg n)^2$
  • $\lg \lg n$ $= \lg(\lg n)$

Basic Properties

For all $a, b, c > 0$:

  • $\log_b xy = \log_b x + \log_b y$
  • $\log_b (x/y) = \log_b x - \log_b y$
  • $\log_b x^a = a \log_b x$
  • $\log_b x = \frac{\log_c x}{\log_c b}$ (Base change)
  • $\log_b (1/x) = -\log_b x$

Crucial Identities ⭐

Often used in Master Theorem & complexity:

  • $a^{\log_b c} = c^{\log_b a}$ (Swap base/arg)
  • $b^{\log_b x} = x$
  • $\log_b b^x = x$
  • $\log_b x = \Theta(\log_c x)$ (All logs same rank)
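
A one-line check of the swap identity above (a standard derivation, not from the slides): write $a = b^{\log_b a}$ and regroup the exponents.

$$a^{\log_b c} = \left(b^{\log_b a}\right)^{\log_b c} = b^{(\log_b a)(\log_b c)} = \left(b^{\log_b c}\right)^{\log_b a} = c^{\log_b a}$$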

Inequalities

For $x > -1$: $\frac{x}{1+x} \le \ln(1+x) \le x$ (with equality iff $x = 0$).

  • $\lg(n!) = \Theta(n \lg n)$ (Stirling's approximation)
  • $x-1 < \lfloor x \rfloor \le x \le \lceil x \rceil < x+1$

Iterated Logarithm ($\log^* n$)

Number of times log must be applied to reach $\le 1$.

  • $\log^* n = 0$ if $n \le 1$
  • $\log^* n = 1 + \log^* (\lg n)$ if $n > 1$
  • Growth: Extremely slow (effectively < 5 for any practical $n$).
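
A minimal Python sketch of the recursive definition (base-2 logarithm):

import math

def log_star(n):
    # Count how many times lg must be applied before the value drops to <= 1
    if n <= 1:
        return 0
    return 1 + log_star(math.log2(n))

print(log_star(2**65536))  # 5, since 2^65536 -> 65536 -> 16 -> 4 -> 2 -> 1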

⚡ Pro Tip: Calculating Bounds

When you see $2^{\lg n}$, it's just $n$. When you see $n^{\lg 7}$ vs $n^2$, remember $\lg 7 \approx 2.807 \implies n^{\lg 7}$ grows strictly faster than $n^2$.

🔍 Searching Algorithms

L3

1. Sequential Search ⭐ S25 S28 S29 S31 S48

1. Specification:

Pre: Arr sequence, len length, key to find.
Post: Index $i$ s.t. $Arr[i] == key$ OR $-1$.

search(Arr, len, key):
  i = 0
  while(i < len):
    if (Arr[i] == key) return i
    i++
  return -1
2. Analysis:

Dominating operation: comparisons. $W(n) = n$, $A(n) = n/2 \implies \Theta(n)$. $S(n) = O(1)$.

Note: Multiplicative factor is 1 ($W(n)=n$) and cannot be improved due to the specificity of the problem (must check every element in unsorted data).

1.5 Skipping Search (k-skip) S51

1. Fundamental Mechanism:

Requires sorted sequence. Checks every $k$-th cell (skips $k-1$ elements). When a value $> key$ is found, it performs a linear check on the skipped elements.

2. Efficiency & Complexity:
  • Average Speed: Asymptotically $k$ times faster than sequential.
  • Complexity: $W(n) = O(n/k + k)$ because it jumps $n/k$ times and scans at most $k$ elements.
  • Optimal k: $k = \sqrt{n}$ minimizes $n/k + k$, giving $W(n) = \Theta(\sqrt{n})$.
  • Rank: Still linear $\Theta(n)$ for fixed $k$; rank only improves if $k$ scales with $n$.
3. Context:

Middle ground between basic Sequential Search and Binary Search; a sketch follows below.
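
A minimal Python sketch of the k-skip idea (my illustration, not the lecture's pseudocode): jump k cells at a time through the sorted array, then linearly scan the block that was skipped over.

def skip_search(arr, key, k):
    n = len(arr)
    i = 0
    # Phase 1: jump in strides of k until we reach or pass the key
    while i < n and arr[i] < key:
        i += k
    # Phase 2: linear scan of the skipped block (including the stop cell)
    for j in range(max(0, i - k + 1), min(i + 1, n)):
        if arr[j] == key:
            return j
    return -1

print(skip_search([1, 3, 5, 7, 9, 11], 7, 2))  # 3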

    2. Binary Search ⭐ S52

    Requires a sorted sequence with $O(1)$ random access.

    1. Specification:

    Pre: Array $A[0..n-1]$ is non-decreasingly sorted.
    Post: Returns index $i \in [0, n-1]$ s.t. $A[i] = key$ OR $-1$ if $\nexists i: A[i]=key$.

    binSearch(Arr, len, key):
      l = 0, r = len - 1
      while(l <= r):
        m = (l + r) / 2  // integer midpoint; l + (r-l)/2 avoids overflow in fixed-width ints
        if (Arr[m] == key) return m
        else if (Arr[m] > key) r = m - 1
        else l = m + 1
      return -1
    2. Stop Property:

    Sequence length halves each step ($n \to n/2 \to \dots \to 1$).

    3. Complexity:

    $W(n) = \Theta(\log n)$, $A(n) = \Theta(\log n)$, $S(n) = O(1)$.

    3. Tournament Algorithm (2nd Smallest) S57

    Uses a "Divide & Conquer" tournament tree to find min, then scans winners' competitors.

    1. Logic:

    Phase 1: Build tree ($n-1$ comparisons). Root is min.
    Phase 2: Find min among keys that lost directly to the winner ($\log n$ candidates).

    2. Complexity:

    Total Ops: $n + \lceil \log n \rceil - 2$.
    $W(n) = \Theta(n)$ (linear, but fewer comparisons than $2n-3$).
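
    A compact Python sketch of the tournament (my illustration, not the lecture's pseudocode): pair players up, record whom each winner beat directly, then take the minimum of the champion's victims.

    def second_smallest(a):
        # players: (value, values this key has beaten directly)
        players = [(x, []) for x in a]
        while len(players) > 1:
            nxt = []
            for i in range(0, len(players) - 1, 2):
                (v1, l1), (v2, l2) = players[i], players[i + 1]
                if v1 <= v2:
                    nxt.append((v1, l1 + [v2]))   # v1 wins, remembers beating v2
                else:
                    nxt.append((v2, l2 + [v1]))
            if len(players) % 2:                  # odd player out gets a bye
                nxt.append(players[-1])
            players = nxt
        champion, beaten = players[0]
        return min(beaten)                        # ~ceil(lg n) candidates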

    4. Hoare's Algorithm (QuickSelect) ⭐ S62

    An efficient "Divide & Conquer" approach to finding the k-th positional statistic (k-th smallest element).

    1. Mechanism:
    • Call Partition on the sequence.
    • If the returned pivot index is equal to k $\implies$ Return the element.
    • Else, continue recursively on only one side (left or right) depending on the pivot index relative to k.
    2. Complexity:

    Average: $O(n)$ (Expected linear).
    Worst Case: $\Theta(n^2)$ (occurs with poor pivots).

    3. Advantage:

    Much faster than sorting the entire array ($O(n \log n)$) if you only need a single positional statistic.
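
    A minimal recursive Python sketch; for self-containment it uses a simple Lomuto-style partition rather than the lecture's version shown later.

    def quickselect(a, l, r, k):
        # k-th smallest (0-based index k) within a[l..r]
        if l == r:
            return a[l]
        p = lomuto_partition(a, l, r)   # pivot lands at its final index p
        if k == p:
            return a[p]
        if k < p:
            return quickselect(a, l, p - 1, k)
        return quickselect(a, p + 1, r, k)

    def lomuto_partition(a, l, r):
        pivot, i = a[r], l              # pivot = last element
        for j in range(l, r):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[r] = a[r], a[i]
        return i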

    | Algorithm | W(n) | A(n) | B(n) | Precondition | Notes |
    |---|---|---|---|---|---|
    | Sequential Search | O(n) | O(n) | O(1) | None | Works on unsorted data |
    | Skipping Algorithm | O(n/k + k) | O(n/k + k) | O(1) | Sorted | Optimal k = √n → O(√n) |
    | Binary Search | O(log n) | O(log n) | O(1) | Sorted | T(n) = T(n/2) + 1; form: lg n + 1 |
    | QuickSelect | O(n²) | O(n) | O(n) | None | Finds k-th smallest element |

    Partition Function ⭐

    Rearranges elements around pivot: left ≤ pivot < right. W(n) = O(n). Used in QuickSort and QuickSelect.

    📊 Sorting Algorithms

    L4-L5

    1. Selection Sort ⭐ S68

    1. Verbal Description:

    Repeatedly finds the minimum element and swaps it to the front.

    selectionSort(S, len):
      for i from 0 to len-1:
        mini = indexOfMin(S, i, len)
        swap(S, i, mini)
    2. Analysis:

    $W(n) = A(n) = B(n) = \Theta(n^2)$. Exactly $n(n-1)/2$ comparisons. $S(n) = O(1)$. Unstable.

    2. Insertion Sort ⭐ S70

    1. Verbal Description:

    Builds a sorted partition by inserting each element into its correct place.

    insertionSort(arr, len):
      for next from 1 to len-1:
        curr = next; temp = arr[next]
        while(curr > 0 && temp < arr[curr-1]):
          arr[curr] = arr[curr-1]; curr--
        arr[curr] = temp
    2. Analysis:

    $W(n) = \Theta(n^2)$, $B(n) = \Theta(n)$ (adaptive). $S(n) = O(1)$. Stable.

    3. Merge Sort ⭐ S76

    mergeSort(arr, l, r):
      if(l < r):
        m = (l + r) / 2
        mergeSort(arr, l, m); mergeSort(arr, m+1, r)
        merge(arr, l, m, r)
    Analysis:

    $W(n) = \Theta(n \log n)$. $S(n) = O(n)$ (not in-place). Stable.

    Data Structure Choice: Using **linked lists** avoids the $O(n)$ space requirement for an auxiliary array, preventing "unacceptably high" space complexity.

    Merge Sort: Comparison-Count Summary

    | Metric | Best Case $B(n)$ | Worst Case $T(n)$ |
    |---|---|---|
    | Recurrence | $2B(n/2) + n/2$ | $2T(n/2) + n - 1$ |
    | Closed Form ($n = 2^k$) | $\frac{n}{2} \lg n$ | $n \lg n - n + 1$ |
    | Big-Theta | $\Theta(n \lg n)$ | $\Theta(n \lg n)$ |

    4. Partition (Lecture) ⭐ S93

    1. Verbal Description:

    Rearranges elements around a pivot (a[l]). Returns index p where a[l..p-1] ≤ a[p] and a[p+1..r] ≥ a[p].

    partition(a, l, r):
      i = l+1; j = r; m = a[l]
      do:
        while(i < r && a[i] <= m) i++
        while(j > i && a[j] >= m) j--
        if(i < j) swap(a[i], a[j])
      while(i < j)
      swap(a[l], a[j]); return j
    Pivot Properties (after):
    • $a[j] = m$ (pivot in its final sorted position)
    • $a[l..j-1] \le m$ (left side)
    • $a[j+1..r] \ge m$ (right side)

    5. QuickSort ⭐ S93

    quickSort(a, l, r):
      if(l < r):
        p = partition(a, l, r)
        quickSort(a, l, p-1); quickSort(a, p+1, r)
    Analysis:

    $W(n) = \Theta(n^2)$, $A(n) = \Theta(n \log n)$. $S(n) = O(\log n)$. Unstable.

    In-Place Efficiency: QuickSort is inherently efficient because the Partition procedure works in-place (constant auxiliary space for comparison/swapping), making it practical for large datasets.

    6. Binary Heap Foundation ⭐ S150

    siftDown(i):  // min-heap, 0-indexed array, current heap size n
      l = 2i+1; r = 2i+2; min = i
      if(l < n && heap[l] < heap[min]) min = l  // child exists only if index < n
      if(r < n && heap[r] < heap[min]) min = r
      if(min != i): swap(i, min); siftDown(min)

    buildHeap: for(i = n/2 - 1; i >= 0; i--) siftDown(i)

    7. HeapSort ⭐ S158

    heapSort(arr, n):        // pairs with a max-heap siftDown for ascending output
      buildHeap(arr)
      for(i = n-1; i > 0; i--):
        swap(arr[0], arr[i]) // move current root to its final slot
        n = i; siftDown(0)   // sift within the shrunken heap arr[0..i-1]
    Analysis:

    $W(n) = A(n) = \Theta(n \log n)$. $S(n) = O(1)$ (in-place). Unstable.

    8. CountSort ⭐ S106

    countSort(a, n, k):
      // k = magnitude of range [0, k]
      C = new Array(k+1) // counts, auxiliary space O(k)
      B = new Array(n)   // output array, O(n)
      for(i=0; i<n; i++) C[a[i]]++
      for(i=1; i<=k; i++) C[i] += C[i-1] // cumulative counts
      for(i=n-1; i>=0; i--) B[--C[a[i]]] = a[i] // right-to-left keeps it stable
      return B
    Analysis:

    Time: $W(n) = \Theta(n + k)$. Space: $S(n) = O(k)$ auxiliary (count array) $+ O(n)$ (output array). Stable.

    9. Radix Sort ⭐ S109

    radixSort(a, d):
      for i from 1 to d:
        stablySortByDigit(a, i)
    Analysis:

    $W(n) = \Theta(d(n + k))$. Requires a stable underlying sort. Stable.

    | Algorithm | W(n) | A(n) | B(n) | Space | Stable? | Notes |
    |---|---|---|---|---|---|---|
    | Selection Sort [S67] | O(n²) | O(n²) | O(n²) | O(1) | No | Find min, swap to front |
    | Insertion Sort [S72] | O(n²) | O(n²) | O(n) | O(1) | Yes | Best on nearly sorted |
    | Merge Sort [S78] | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Divide & Conquer |
    | QuickSort [S92] | O(n²) | O(n log n) | O(n log n) | O(log n) | No | Worst on sorted input |
    | HeapSort [S163] | O(n log n) | O(n log n) | O(n log n) | O(1) | No | In-place, uses heap |
    | CountSort [S106] | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes | k = range of values |
    | RadixSort [S108] | O(d(n + k)) | O(d(n + k)) | O(d(n + k)) | O(n + k) | Yes | d = number of digits |

    ⚖️ What is Stability? S91

    A sorting algorithm is stable if it preserves the relative order of elements with equal keys.

    Example: [A1, B, A2] sorted by letter → Stable: [A1, A2, B] | Unstable: [A2, A1, B]

    Iterative Sorting: If an algorithm is stable, it is possible to sort records by multiple attributes in iterations (from least significant to most significant attribute) without destroying the sorting order of the previous iteration.
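
    A tiny Python illustration of this multi-pass idea (Python's sorted is stable):

    records = [("B", 2), ("A", 2), ("A", 1)]
    # Pass 1: least significant attribute (the number)
    by_num = sorted(records, key=lambda r: r[1])
    # Pass 2: most significant attribute (the letter);
    # stability preserves the number order within equal letters
    print(sorted(by_num, key=lambda r: r[0]))  # [('A', 1), ('A', 2), ('B', 2)]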

    Stable Algorithms (Yes)

    • Insertion Sort: Only shifts elements strictly smaller. Equal elements stay in original relative order.
    • Merge Sort: Stable if the merge function prefers the left element in case of a tie (L[i] <= R[j]).
    • CountSort: Processes the input array backwards (right-to-left) to place elements in the output correctly.
    • RadixSort: Depends on the stability of the underlying sort (usually CountSort) to work correctly.

    Unstable Algorithms (No)

    • Selection Sort: The long-distance swap of the minimum element can jump it over an equal element elsewhere.
    • QuickSort: The Partition function swaps elements across the pivot, often mixing up the relative order of identical values.
    • HeapSort: Building the heap and repeatedly extracting the max/min involves many non-local swaps that destroy stability.

    ⚠️ Comparison Sort Lower Bound

    Any comparison-based sort requires $\Omega(n \log n)$ comparisons in the worst case.

    Decision Tree Model:
    • Algorithm follows a binary tree (comparisons = nodes).
    • Number of reachable leaves $L$ must be $\ge n!$ (to cover all input permutations).
    • Tree height $h$ (max comparisons): $2^h \ge L \ge n! \implies h \ge \log_2(n!)$.
    • Using Stirling's Approximation: $\log_2(n!) \approx n \log_2 n - 1.44n \implies \Omega(n \lg n)$.

    Non-comparison algorithms like CountSort bypass this by using values as indices.

    Merge Function ⭐

    Merges two sorted segments. Max comparisons = n + m - 1. Time: O(n+m).
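
    A minimal Python sketch of the merge step (the <= on ties takes from the left run, which is exactly what makes Merge Sort stable):

    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:        # tie -> prefer the left run (stable)
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])  # at most n+m-1 comparisons
        return out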

    📦 Concrete Data Structures

    L7

    ⚖️ Sequence Access Patterns

    Sequences support two basic types of access, with differing efficiency on array implementations:

    • Absolute Access: Location identified by index. Fast on arrays: $O(1)$ constant time.
    • Relative Access: Location identified as successor/predecessor of a given element. Slow on arrays: $\Theta(n)$ linear time (requiring shifts/scans).

    📈 Dynamic Array S137

    A resizing array that grows (doubles) when full. Provides $O(1)$ amortized insertion at the end. Focus: Indexing.

    📐 Dynamic Ordered Set (OPT)

    Extension of Dictionary for linearly ordered keys. Supports order-based operations (min, max, succ, pred). Focus: Order.

    📊 Amortized Analysis

    Averaging cost over a sequence of operations. Focuses on the total cost of $n$ operations.

    Comparison: Dynamic Array vs. Dynamic Ordered Set

    Dynamic Array = "Unbounded Array". Optimized for $O(1)$ access by index and end-insertion. Not necessarily sorted.
    Dynamic Ordered Set = "Sorted Dictionary". Optimized for search and order-based proximity (predecessor/successor). Usually implemented via AVL/RB-Trees ($O(\log n)$).

    | Operation | Array | Dynamic Array | Singly Linked List | Doubly Linked List |
    |---|---|---|---|---|
    | Access by index | O(1) | O(1) | O(n) | O(n) |
    | Insert at front | O(n) | O(n) | O(1) | O(1) |
    | Insert at end | O(1)* | O(1)** | O(1)*** | O(1) |
    | Delete at front | O(n) | O(n) | O(1) | O(1) |
    | Delete at end | O(1) | O(1) | O(n) | O(1) |
    | Search | O(n) | O(n) | O(n) | O(n) |

    * Fixed size. ** Amortized cost. *** With tail pointer.

    🏁 Operations on Sequence Ends

    In many practical applications, operations concern only the ends of the sequence (e.g., removeFirst, addLast). On **Arrays**, any "insert" has a pessimistic linear time $\Theta(n)$. Linked lists or Cyclic Arrays are often more efficient for these patterns.

    🔗 Singly vs. Doubly Linked S113

    • Singly: next pointer only. Minimal memory overhead.
    • Doubly: next and prev pointers. Costs twice the memory for links, but allows bi-directional navigation (crucial for InsertionSort).

    🛠️ Splice & Extensions

    • Splice(a,b,t): Moves sublist $(a \dots b)$ after node $t$. **$O(1)$ constant time**, even for large sublists!
    • Size Attribute: Efficiently cached in $O(1)$ by updating on insert/delete.
    • ⚠️ Splice Catch: Maintaining $O(1)$ size is impossible during an inter-list splice if the sublist length is unknown.
    • Attributes: last pointer (enables $O(1)$ pushBack).

    🏗️ Abstract Data Structures (ADS)

    An **ADS** is defined by its interface (supported operations), not by its implementation. The implementation choice (e.g., Array vs. List) determines the time/space complexity of these operations. An ADS stands in contrast to a concrete data structure.

    🥞 Stack (ADS) ⭐ S125

    A LIFO (Last-In, First-Out) structure defined by its core interface:

    • push(T): Add element at top.
    • pop(): Remove and return top element (modifier).
    • top(): Return top element without removing (non-modifier).

    Applications: Undo operations, function calls (call stack), browser back button, expression parsing.

    🗃️ Deque (ADS) ⭐ S127

    Double Ended Queue (pronounced "deck"). A generalization of both stack and queue.

    • first(), last(): View ends.
    • pushFront(T), pushBack(T): Add to ends.
    • popFront(), popBack(): Remove from ends.

    ⚖️ Linked Lists vs. Arrays

    Linked Lists (+)
    • Fast relative ops ($O(1)$ insert-after)
    • Unbounded size (dynamic)
    Linked Lists (-)
    • Memory overhead for pointers
    • Slow absolute access ($\Theta(n)$)

    Note: Arrays have **bounded** size and fast $O(1)$ absolute access but slow insertions.

    🔄 Cyclic Structures

    • Cyclic List: Last node points to first.
      Doubly invariant: (n.next.prev) == (n.prev.next) == n
    • Cyclic Array: Uses modulo n arithmetic to wrap first/last pointers, keeping them at sequence ends efficiently.
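
    A minimal Python ring-buffer queue (my illustration of the modulo trick, fixed capacity; the inject/out names follow the Queue ADT below):

    class CyclicQueue:
        def __init__(self, capacity):
            self.buf = [None] * capacity
            self.head = 0              # index of the first element
            self.size = 0

        def inject(self, x):           # enqueue at the back, O(1)
            assert self.size < len(self.buf), "queue full"
            self.buf[(self.head + self.size) % len(self.buf)] = x
            self.size += 1

        def out(self):                 # dequeue from the front, O(1)
            assert self.size > 0, "queue empty"
            x = self.buf[self.head]
            self.head = (self.head + 1) % len(self.buf)
            self.size -= 1
            return x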

    🗃️ Abstract Data Types

    L7
    | ADT | Access Pattern | Operations | Efficient Implementation |
    |---|---|---|---|
    | Stack | LIFO (Last In First Out) | push(), pop(), top() | Array or Linked List - O(1) |
    | Queue | FIFO (First In First Out) | inject(), out(), first() | Circular array or Linked List - O(1) |
    | Deque | Both ends | pushFront/Back(), popFront/Back() | Doubly linked list - O(1) |
    | Dynamic Ordered Set | Order-based Dictionary | min(), max(), succ(), pred() | Balanced BST (AVL/RB) - O(log n) |

    Amortized Analysis (Unbounded Array)

    Doubling strategy: When full, create 2× array & copy. Single insert can be O(n), but amortized cost = O(1) per insert.
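
    A minimal sketch of the doubling strategy in Python (illustrative only; Python's built-in list already resizes itself):

    class UnboundedArray:
        def __init__(self):
            self.cap, self.n = 1, 0
            self.data = [None] * self.cap

        def push_back(self, x):
            if self.n == self.cap:            # full: O(n) copy into a 2x array
                self.cap *= 2
                new_data = [None] * self.cap
                new_data[:self.n] = self.data
                self.data = new_data
            self.data[self.n] = x             # the common O(1) case
            self.n += 1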

    🌳 Priority Queue & Binary Heap S147

    L8 ⭐ COMPULSORY

    Heap Property (Min-Heap)

    For every node: priority(parent) ≤ priority(child). Root has minimum. Height = O(log n).

    | Operation | Complexity | Description |
    |---|---|---|
    | Insert (sift-up) | O(log n) | Add at end, bubble up |
    | DeleteMin (sift-down) | O(log n) | Remove root, replace with last, sift down |
    | FindMin | O(1) | Return root |
    | BuildHeap | O(n) | Heapify from n/2 down to 0 |
    | HeapSort | O(n log n) | BuildHeap + n × deleteMin |

    Array Representation

    For node at index i: Parent = ⌊(i-1)/2⌋, Left child = 2i+1, Right child = 2i+2
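
    Sift-up is listed above but not spelled out elsewhere on this sheet; a minimal min-heap insert in Python (list-based, 0-indexed):

    def heap_insert(heap, x):
        heap.append(x)                  # add at the end
        i = len(heap) - 1
        while i > 0:
            parent = (i - 1) // 2
            if heap[parent] <= heap[i]: # heap property restored
                break
            heap[parent], heap[i] = heap[i], heap[parent]  # bubble up
            i = parent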

    📖 Dictionary & Tree Structures S168

    L9 ⭐ COMPULSORY

    General Operations: Print & Splice S120

    1. Verbal Description:

    print traverses the list. splice moves a range of nodes $[b..e]$ after $p$ by re-linking pointers (constant time).

    print(head):
      curr = head
      while(curr != null):
        print(curr.data); curr = curr.next
    
    splice(p, b, e):  // Move [b..e] after p
      b.prev.next = e.next; e.next.prev = b.prev
      e.next = p.next; b.prev = p
      p.next.prev = e; p.next = b

    Simple (Naive) Implementations

    | Variant | Search | Insert | Delete | Notes |
    |---|---|---|---|---|
    | Unordered (Array/List) | O(n) | O(1) | O(n) | Fast insert, slow discovery |
    | Ordered Array | O(log n) | O(n) | O(n) | Binary search for keys |
    | Ordered List | O(n) | O(n) | O(n) | Sorting does NOT help search |
    | Direct Addressing | O(1) | O(1) | O(1) | Fast but space-inefficient ($O(|U|)$) |

    Direct Addressing: Maps keys directly to array indices. Ideal when key universe $U$ is small.

    | Structure | Search | Insert | Delete | Notes |
    |---|---|---|---|---|
    | Hash Table | O(1) avg | O(1) avg | O(1) avg | O(n) worst with collisions |
    | BST S184 | O(h) | O(h) | O(h) | h = O(n) worst, O(log n) balanced |
    | AVL Tree S193 | O(log n) | O(log n) | O(log n) | Balance factor \|h_L - h_R\| ≤ 1 |
    | Red-Black Tree | O(log n) | O(log n) | O(log n) | Fewer rotations than AVL |
    | Dynamic Ordered Set (Advanced) | O(log n) | O(log n) | O(log n) | Search + Order Ops (Pred/Succ) |

    BST: Search & Insert

    1. Verbal Description:

    search follows the tree property (left < node < right). insert finds the correct null leaf position to maintain this property.

    search(node, key):
      if(node == null || node.key == key) return node
      if(key < node.key) return search(node.left, key)
      else return search(node.right, key)
    
    insert(T, z):
      y = null; x = T.root
      while(x != null):
        y = x                       // y trails x as the future parent
        if(z.key < x.key) x = x.left
        else x = x.right
      z.parent = y
      if(y == null) T.root = z      // tree was empty
      else if(z.key < y.key) y.left = z
      else y.right = z

    BST: Delete

    Handles three cases: no child, one child, or two children (replace with successor). Helper transplant(T, u, v) replaces the subtree rooted at u with the subtree rooted at v in u's parent.

    delete(T, z):
      if(z.left == null) transplant(T, z, z.right)
      else if(z.right == null) transplant(T, z, z.left)
      else:
        y = minimum(z.right)
        if(y.parent != z):
          transplant(T, y, y.right)
          y.right = z.right; y.right.parent = y
        transplant(T, z, y)
        y.left = z.left; y.left.parent = y

    AVL Balance Factor

    $BF(node) = height(left) - height(right)$. Must be in $\{-1, 0, 1\}$. Rebalance via rotations (LL, RR, LR, RL).

    Tree Rotations & Splay S198

    Rotations (AVL/Splay):

    Left Rotation: Moves right child to parent position. O(1) pointer swaps. Necessary for maintaining balance in AVL and splaying in Splay Trees.
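
    A minimal left-rotation sketch in Python (CLRS-style; nodes are assumed to carry left/right/parent fields):

    def rotate_left(T, x):
        y = x.right                 # y moves up into x's position
        x.right = y.left            # y's left subtree becomes x's right
        if y.left is not None:
            y.left.parent = x
        y.parent = x.parent         # link y to x's old parent
        if x.parent is None:
            T.root = y
        elif x is x.parent.left:
            x.parent.left = y
        else:
            x.parent.right = y
        y.left = x                  # x becomes y's left child
        x.parent = y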

    Splay Trees:

    Self-adjusting BST. splay(x) moves node $x$ to root via zig-zig, zig-zag rotations. Amortized $O(\log n)$ for all operations.

    Binary Tree Properties

    Max nodes at height h: $2^{h+1} - 1$. Height of n nodes: $\lfloor \log_2 n \rfloor$ (complete tree).

    Correctness & Complexity Analysis Examples

    L1-L2, L6

    Example 1: Sum of Array ⭐

    1. Specification:

    Initial condition: Arr is an integer array, len is a natural number.
    Final condition: Result is the sum of all elements: $\sum_{j=0}^{len-1} Arr[j]$.

    sum(Arr, len):
      sum = 0
      i = 0
      while(i < len):
        sum += Arr[i]
        i++
      return sum
    2. Stop Property:

    $i$ starts at 0, increases by 1 each loop, stops when $i \ge len$. Since $len$ is finite, the loop terminates.

    3. Loop Invariant:

    $sum = \sum_{j=0}^{i-1} Arr[j]$. At termination ($i=len$), $sum = \sum_{j=0}^{len-1} Arr[j]$.

    4. Complexity:

    $W(n) = \Theta(n)$ time, $S(n) = \Theta(1)$ space.

    Example 2: Maximum in Array

    1. Specification:

    Initial condition: Arr of length $len > 0$.
    Final condition: Result $x = \max \{Arr[j] \mid 0 \le j < len\}$.

    findMax(Arr, len):
      i = 1
      x = Arr[0]
      while(i < len):
        if (Arr[i] > x) x = Arr[i]
        i++
      return x
    2. Stop Property:

    Finite loop from $i=1$ to $len$. Each step $i$ increases by 1.

    3. Loop Invariant:

    $x = \max \{Arr[j] \mid 0 \le j < i\}$. At termination, $i=len \implies x=\text{global max}$.

    Example 3: Fast Exponentiation (Power)

    Efficiently computes $x^n$ using Divide & Conquer by squaring $x$.

    power(x, n):
      if (n == 0) return 1
      if (n % 2 == 0) 
        return power(x*x, n/2)
      else 
        return x * power(x*x, (n-1)/2)
    1. Specification:

    Pre: $x \in \mathbb{R}$, $n \in \mathbb{N}_0$.
    Post: Result $= x^n$.

    2. Complexity:

    $W(n) = \Theta(\log n)$ multiplications.
    $S(n) = \Theta(\log n)$ recursion stack.

    Amortized Case: Stack with multiPop S135

    1. Operation Description:

    multiPop(k) calls pop() $k$ times or until the stack is empty. While a single call is $O(k)$, the average cost over a sequence is small.
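
    A minimal sketch (a Python list as the stack):

    def multi_pop(stack, k):
        # pops k times or until empty; a single call costs O(min(k, size))
        popped = []
        while k > 0 and stack:
            popped.append(stack.pop())
            k -= 1
        return popped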

    2. Aggregate Method:

    Total $pop$s $\le$ total $push$es. In $m$ operations, total cost is $O(m) \implies O(1)$ amortized.

    3. Accounting Method:

    Pay 2 for push (1 real + 1 credit). pop (and multiPop) uses stored credits.

    4. Potential Method:

    $\Phi(S) = size(S)$.
    $a_{push} = 1 + (s+1) - s = 2$.
    $a_{multipop} = k + (s-k) - s = 0$.

    Amortized Analysis: Formal Methods S135-142

    1. Aggregate Method:

    Show that for any sequence of $n$ operations, the total time $T(n)$ is bounded. Amortized cost $a = T(n)/n$ (absolute worst-case average).

    2. Accounting Method:

    Assign an amortized cost $\hat{c}_i$ to each operation. If $\hat{c}_i > c_i$ (real cost), store the surplus as credit for that element. Requirement: $\sum_{i=1}^n \hat{c}_i \ge \sum_{i=1}^n c_i$ (the total credit must always be non-negative).

    3. Potential Method:

    Assign a potential function $\Phi_i$ to state $i$, with $\Phi_0 = 0$ and $\Phi_i \ge 0$.

    Amortized cost: $a_i = t_i + \Phi_i - \Phi_{i-1}$ (where $t_i$ is the real cost).
    Total: $\sum a_i = \sum t_i + (\Phi_m - \Phi_0)$.
    Since $\Phi_m - \Phi_0 \ge 0$, then $\sum t_i \le \sum a_i$.