
Tangential question: How can an algorithm end up with a fractional exponent? It's clear to me how nested loops can give O(n^2) or O(n^3), and how a binary search can give O(log n), but how on earth can you get something like O(n^2.371552), or even O(n^2.7)?


The classic example here is Strassen's algorithm. You divide your two input matrices into quarters, and then you need to do 7 matrix multiplies of the smaller (n/2) matrices. When you add up the total work done, this ends up needing to do n^(log base 2 of 7) work, or about n^2.8.
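A quick way to see where that exponent comes from is to count the scalar multiplications the recursion performs. This is a sketch, not Strassen's algorithm itself: it only tallies the 7 recursive sub-products per level, assuming n is a power of 2.

```python
import math

def strassen_mults(n):
    """Scalar multiplications done by Strassen's recursion on an n x n
    matrix (n a power of 2): 7 sub-products of size n/2 at each level."""
    if n == 1:
        return 1
    return 7 * strassen_mults(n // 2)

n = 256
count = strassen_mults(n)          # 7^log2(n) = n^log2(7)
exponent = math.log(count, n)      # recover the exponent empirically
print(count, round(exponent, 4))   # prints 5764801 2.8074
```

The naive algorithm does 8 sub-products, giving 8^log2(n) = n^3; dropping just one per level is what pulls the exponent below 3.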


Many divide-and-conquer algorithms have running times of the form O(n^(log_b a)) because of the master theorem https://en.m.wikipedia.org/wiki/Master_theorem_(analysis_of_...
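Concretely: for a recurrence T(n) = a*T(n/b) + O(n^c), the master theorem gives T(n) = O(n^(log_b a)) when log_b(a) > c. A minimal sketch of evaluating that exponent for a few well-known splits (the helper name is mine, not from any library):

```python
import math

def divide_and_conquer_exponent(a, b):
    """Exponent log_b(a) for a recurrence T(n) = a*T(n/b) + (cheap combine)."""
    return math.log(a, b)

print(divide_and_conquer_exponent(7, 2))  # Strassen:  7 subproblems of size n/2, ~2.807
print(divide_and_conquer_exponent(3, 2))  # Karatsuba: 3 subproblems of size n/2, ~1.585
print(divide_and_conquer_exponent(2, 2))  # merge sort split: exponent 1.0
```

Any (a, b) pair where a is not an exact power of b yields an irrational exponent, which is why fractional exponents are so common for divide-and-conquer algorithms.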


I think a simpler algorithm like https://en.wikipedia.org/wiki/Karatsuba_algorithm demonstrates how it can happen.
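For reference, a compact version of Karatsuba multiplication: it replaces the 4 half-size products of schoolbook multiplication with 3, which is where the O(n^(log2 3)) ≈ O(n^1.585) bound comes from.

```python
def karatsuba(x, y):
    """Multiply non-negative integers with 3 recursive products instead
    of 4, giving O(n^(log2 3)) ~= O(n^1.585) digit operations."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # The third product covers the cross terms: (a+b)(c+d) - ac - bd = ad + bc
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652
```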


It's pretty easy to get to O(N^(3/2)) at least, since N^(3/2) is just sqrt(N)^3.

Imagine an algorithm that rearranges a list into a sqrt(N) by sqrt(N) grid, and does O(num_columns^2) work for each row.

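That hypothetical grid algorithm can be sketched directly; the function below just counts the work units to confirm the sqrt(N)^3 = N^1.5 total (assuming N is a perfect square for simplicity).

```python
import math

def grid_work(n):
    """Total work for a sqrt(N) x sqrt(N) grid with O(num_columns^2)
    work per row: sqrt(N) rows * sqrt(N)^2 = N^1.5 units."""
    side = math.isqrt(n)           # sqrt(N) rows and columns
    work = 0
    for _row in range(side):
        work += side * side        # O(num_columns^2) work on this row
    return work

print(grid_work(10000))  # 100 rows * 100^2 = 1000000 = 10000^1.5
```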


