Though sleep sort is obnoxious (& hilarious), I'm sure it could be of use in educational situations. It's both elegant and intuitive. Thought provoking, maybe?
In reality sleep sort isn't actually doing the sorting; it's just asking the OS to do it with a traditional sorting algorithm, because the OS has to sort all the sleep timers to determine the order in which to wake the threads.
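For reference, the whole trick fits in a few lines. A minimal sketch in Python (the scale factor is made up, and it assumes small non-negative values and a scheduler that wakes threads close to their requested times):

    import threading
    import time

    def sleep_sort(values, scale=0.1):
        # One thread per value: each sleeps proportionally to its value, then
        # appends it to the output. The OS's timer machinery does the real ordering.
        result = []
        lock = threading.Lock()

        def worker(v):
            time.sleep(v * scale)
            with lock:
                result.append(v)

        threads = [threading.Thread(target=worker, args=(v,)) for v in values]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # usually [1, 1, 2, 3, 4, 5, 6, 9]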
And OSes either use a simple list for the timer queue (which means O(n^2) overall complexity) or something heap-based (typically a binary heap), which ultimately makes the whole thing O(n lg n).
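A rough sketch of the heap-based case (just the shape of the idea, not how any particular kernel implements it): the timer queue is a priority queue keyed on wakeup time, so n sleeps cost O(n lg n) in total.

    import heapq

    # Toy timer queue: (wakeup_time, thread_id) pairs kept in a binary heap.
    timer_queue = []

    def add_sleep(now, duration, thread_id):
        heapq.heappush(timer_queue, (now + duration, thread_id))  # O(lg n)

    def pop_due(now):
        # Wake every thread whose deadline has passed, earliest first.
        woken = []
        while timer_queue and timer_queue[0][0] <= now:
            _, thread_id = heapq.heappop(timer_queue)             # O(lg n)
            woken.append(thread_id)
        return woken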
> because the OS will have to sort all the sleep timers
Not necessarily (though yes, any efficient real-life implementation would). It could wait for a tick of the clock and, on each tick, scan an unordered list of pending events to see if any are due to happen.
Of course, if the list gets big enough that it takes a long time to scan, that breaks sleep sort: if enough time passes between two checks for two or more events to become due, those events will fire in an undefined order.
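In sketch form (toy code, invented names): nothing is ever sorted, each tick is just a linear scan, and anything due within the same tick comes out in whatever order the scan finds it.

    # Unordered list of (deadline, thread_id) pairs; never sorted.
    pending = [(1.5, "a"), (0.2, "b"), (0.9, "c")]

    def tick(now):
        # Scan everything and wake whatever happens to be due. If two deadlines
        # fall inside the same tick, the wake order is whatever the scan produced.
        due = [item for item in pending if item[0] <= now]
        for item in due:
            pending.remove(item)
        return [thread_id for _, thread_id in due]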
I don't think that fixes it. The closest you could come is to have n physical timers to sort n elements, but then your "sorting algorithm" can't sort more than n elements. As soon as you had more elements than timers you would still need some actual sorting algorithm to determine priority.
It's kind of like claiming you have an O(n) sorting algorithm because you can run an O(n) selection algorithm to find the kth smallest element for each of 0..n-1 and then run n of them in parallel on n cores to get an O(n) sorting algorithm. If you actually have n cores then the wall clock time for executing that algorithm will be O(n), but the number of operations won't be and it all falls apart when you run out of hardware parallelism.
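Concretely, the bogus construction looks something like this (a sketch; quickselect is expected O(n) per call, and the parallelism is only notional here):

    import random

    def quickselect(items, k):
        # Expected O(n): return the k-th smallest element (0-indexed).
        pivot = random.choice(items)
        lo = [x for x in items if x < pivot]
        eq = [x for x in items if x == pivot]
        hi = [x for x in items if x > pivot]
        if k < len(lo):
            return quickselect(lo, k)
        if k < len(lo) + len(eq):
            return pivot
        return quickselect(hi, k - len(lo) - len(eq))

    def bogus_parallel_sort(items):
        # Pretend each call below runs on its own core: wall-clock time would be
        # O(n), but the total work is n calls of O(n) each, i.e. O(n^2) operations.
        return [quickselect(items, k) for k in range(len(items))]

    print(bogus_parallel_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]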
It reminds me of bucket sort. There is an intellectually curious implementation for sorting a list of unique fixed-width (e.g. 32-bit) integers that goes like this: you allocate an m-bit (e.g. 2^32-bit / 512 MByte) array and zero it (O(m)), then you inspect every element and set that bit of the array (O(n)), then you traverse the array and append an element to the sorted list for every set bit (O(m)). So the overall algorithm is actually O(1) in the number of elements, because you're doing m+n+m work where m > n, which means you're doing at most m+m+m. The problem is that in practice, for most real lists, the constant factor is so much larger than n that using an O(n log n) sorting algorithm will be significantly faster. And it's clearly Do Not Use if you're sorting e.g. 128-bit ints. But for n sufficiently close to m it can actually be useful. It would probably be tough to beat for sorting a 3 billion element array of 32-bit ints. Or a 200 element array of 8-bit ints.
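A small version of the same idea, shrunk to 8-bit values so the array stays tiny (names are mine, and a bytearray stands in for a true bit array; it assumes unique non-negative integers no larger than max_value):

    def bitmap_sort(values, max_value):
        # Assumes all values are unique integers in [0, max_value].
        bits = bytearray(max_value + 1)              # O(m): allocate and zero
        for v in values:                             # O(n): mark each value
            bits[v] = 1
        return [i for i, b in enumerate(bits) if b]  # O(m): scan in order

    print(bitmap_sort([40, 7, 255, 0, 13], 255))  # [0, 7, 13, 40, 255]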
No, the OS still needs to maintain a sorted list, and that list is likely either maintained by a single thread or protected by some locking mechanism. Whenever the OS is looking for something to run, it inspects the head of that list to see if the current time >= the time at which that thread should be woken up. If it can't find anything there, it checks the 'runnable' queue (processes that are CPU bound), and if there's nothing there either it falls back to some idle loop.
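Grossly simplified, the decision the scheduler makes every time it looks for work is something like this (invented names, sketch only):

    def pick_next(now, sleep_queue, runnable_queue):
        # sleep_queue is kept sorted by wakeup time; only the head is inspected.
        if sleep_queue and sleep_queue[0][0] <= now:
            _, thread = sleep_queue.pop(0)    # wake the earliest sleeper
            return thread
        if runnable_queue:
            return runnable_queue.pop(0)      # otherwise run something CPU-bound
        return "idle"                         # nothing to do: idle loop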
> I'm sure it could be of use in educational situations.
It is a good example of how algorithm design can be done naively and algorithm analysis can be done badly, so it definitely has at least some educational value.
Excluding implementation-specific edge cases that would break the underlying assumptions about thread wakeup times, it is an O(N) time complexity sort in all (best, worst, average) cases. A miracle! Get thee to the patent office! Eat my shorts, Shell!
Of course the constant multiplier involved is so massive that it would not do better than an O(n log n) or even an O(n^2) sort in any practical situation, and that's before considering space requirements and thread-scheduling overhead.
Unless your thread scheduler can fairly choose from linearly many threads in constant time, with zero-cost context switching, and your threads take zero time to call sleep(), you eventually have to make the base sleep time longer at large input sizes, so it's not really linear time complexity.
Mainly for the purposes of making my earlier statement correct, I consider that to be an implementation specific problem which we do not need to consider in theoretical discussions!
The OS needs to pick which thread to run from a pool of N threads - absolute best case an O(log N) operation, either when the thread is queued or when it's being selected. This happens N times, so the runtime seems like it would actually scale as O(N log N).
On the other hand, if you've got a scheduler that can pick the thread with the smallest sleep time from a list in O(1) time, you could just use whatever algorithm it's using to make an O(N) selection sort...
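Put differently, if that magic O(1) pick-the-smallest operation existed, sorting would be linear for free. A hypothetical sketch, where pick_min is deliberately left as the magic part:

    def sort_with_magic_scheduler(items, pick_min):
        # pick_min is assumed to remove and return the smallest element in O(1);
        # that's the part that doesn't exist. n calls at O(1) each would give an
        # O(n) sort, which is why the premise is suspect.
        out = []
        while items:
            out.append(pick_min(items))
        return out

    # With an honest pick_min this is just selection sort, i.e. O(n^2):
    print(sort_with_magic_scheduler([3, 1, 2], lambda xs: xs.pop(xs.index(min(xs)))))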
But if the time multiplier used is high enough, the time spent choosing a thread at each step is small. So while that overhead could increase processing time, the wall-clock time needn't be affected.
Again this is why I think it makes a good example for illustrating critical thinking in the area of algorithm design and analysis. It is an obviously silly example which could actually work, within certain practical boundaries, and the many obvious problems with it are hidden in more complex processes/structures.
Hilarious, yes, elegant, no. There's no guarantee that the list will actually come out sorted (unless you dig through the implementation of sleep() down to the kernel and confirm that the order is always deterministic). It's funny precisely because it has nothing at all going for it, and yet almost kind of works.