How the algorithm works
Euclid noticed that gcd(a, b) = gcd(b, a mod b). Repeating that substitution drives b toward 0 quickly; when b reaches 0, the current a is the GCF.
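The loop above can be sketched in a few lines; this is a minimal Python version of the substitution just described (the function name and example inputs are illustrative, not from the original):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b  # gcd(a, b) = gcd(b, a mod b)
    return abs(a)        # gcd is conventionally non-negative

print(gcd(252, 105))  # → 21
```

When b becomes 0 the loop exits and a holds the answer, exactly as stated above.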
Why is this so fast? Every two steps at least halve the larger number, so the number of steps is logarithmic in the size of the inputs. The worst case, where the numbers shrink as slowly as possible, is consecutive Fibonacci numbers.
Extended algorithm (Bézout's identity)
Beyond the GCF, the extended Euclidean algorithm finds integers x and y such that a·x + b·y = gcd(a, b). These coefficients are central to modular inverses and RSA cryptography.
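A common way to compute those Bézout coefficients is to carry two extra sequences alongside the remainders; this is a sketch of that iterative approach in Python (names like `extended_gcd` and the modular-inverse usage are illustrative assumptions, not from the original):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b  # remainder sequence
    old_x, x = 1, 0  # coefficient of a
    old_y, y = 0, 1  # coefficient of b
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
print(g, 240 * x + 46 * y)  # both equal gcd(240, 46) = 2
```

The modular-inverse use mentioned above follows directly: if gcd(a, m) = 1, then x from extended_gcd(a, m), taken mod m, is the inverse of a modulo m.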
FAQ
How many steps does it take?
Lamé's theorem bounds the worst case at about five times the number of decimal digits of the smaller input, and consecutive Fibonacci numbers are exactly the inputs that achieve it; it was this slow case that led Lamé to prove the bound. Typical inputs finish in far fewer steps.
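The Fibonacci worst case is easy to see empirically by counting division steps; this small Python sketch (helper name and sample inputs are illustrative) compares a Fibonacci pair against an ordinary pair of similar size:

```python
def gcd_steps(a: int, b: int) -> int:
    """Count how many division steps Euclid's algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers: every quotient is 1, so the
# numbers shrink as slowly as the algorithm allows.
print(gcd_steps(89, 55))    # → 9
print(gcd_steps(1000, 55))  # → 3
```

Nine steps for the Fibonacci pair (89, 55) versus three for (1000, 55), consistent with the Fibonacci case being the slowest.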
Why does it stop at 0?
Because gcd(a, 0) = a — every non-zero integer divides 0 evenly, so once one operand reaches 0, the GCF is simply the other operand.
Is this how computers actually compute GCD?
Yes. Many language standard libraries use Euclid's algorithm, and some use the binary GCD variant (Stein's algorithm), which replaces division with shifts and subtraction — operations that are cheap on most hardware.
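For reference, the binary GCD variant mentioned above can be sketched as follows; this is a minimal Python illustration of Stein's algorithm for non-negative inputs (the function name is an assumption for this example):

```python
def binary_gcd(a: int, b: int) -> int:
    """Stein's binary GCD: shifts and subtraction only, no division."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:  # both even: factor out a shared power of 2
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:        # strip remaining factors of 2 from a
        a >>= 1
    while b != 0:
        while b & 1 == 0:    # b's factors of 2 cannot be shared now
            b >>= 1
        if a > b:
            a, b = b, a      # keep a <= b so the subtraction is non-negative
        b -= a               # gcd(a, b) = gcd(a, b - a)
    return a << shift        # restore the shared power of 2

print(binary_gcd(48, 18))  # → 6
```

It relies on three identities: gcd(2a, 2b) = 2·gcd(a, b), gcd(2a, b) = gcd(a, b) when b is odd, and gcd(a, b) = gcd(a, b − a).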