Thanks for commenting! Actually, in this case "the work being done" can be very fast because it can run asynchronously. For context, here’s how this plays out in a real-world application.
The original algorithm was provided by DeepSeek, and our optimized implementation achieves a 92x speedup over it. The 5x number compares against another baseline that has not been disclosed yet.
When integrating EPLB into vLLM, I discovered, somewhat unexpectedly, that the open-source algorithm consumes nearly half of the total time of a rearrangement step, with the remaining time spent transferring weights across GPUs. To address this, I applied OpenEvolve to the algorithm, with the primary objective of improving speed while maintaining the same balance factor. It performed remarkably well. With additional optimizations to the weight transfer, the overall overhead is now almost negligible.
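For readers unfamiliar with the problem: a rearrangement step has to pick a new expert-to-GPU placement that keeps every GPU roughly equally loaded. Here is a toy greedy sketch of that objective (my illustration only, not the actual DeepSeek/vLLM code, and much simpler than the real algorithm):

    import numpy as np

    # Toy illustration only -- NOT the DeepSeek EPLB or vLLM code. It shows the
    # objective of a rearrangement step: place experts so that the balance
    # factor (max GPU load / mean GPU load) stays close to 1.
    def greedy_rebalance(expert_load, num_gpus, slots_per_gpu):
        order = np.argsort(expert_load)[::-1]        # heaviest experts first
        gpu_load = np.zeros(num_gpus)
        gpu_free = np.full(num_gpus, slots_per_gpu)  # expert slots left per GPU
        placement = {}
        for e in order:
            # lightest GPU that still has a free expert slot
            candidates = np.where(gpu_free > 0)[0]
            g = candidates[np.argmin(gpu_load[candidates])]
            placement[int(e)] = int(g)
            gpu_load[g] += expert_load[e]
            gpu_free[g] -= 1
        return placement, gpu_load.max() / gpu_load.mean()

    # Skewed, made-up expert popularity: 64 experts onto 8 GPUs x 8 slots.
    load = np.random.default_rng(0).pareto(1.5, size=64)
    placement, balance = greedy_rebalance(load, num_gpus=8, slots_per_gpu=8)
    print(f"balance factor: {balance:.3f}")  # 1.0 would be perfectly balanced

The real algorithm also handles hierarchical placement and expert replication; the point of the evolved version was to make this step cheap enough that its cost no longer dominates a rearrangement.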
While no one will deny you (or I guess your system) the immense satisfaction of a 100x improvement on a given step, I think it would be helpful to note the frequency of this rebalancing step, and to contextualize your result in terms of the runtime (or throughput) of the workload(s) you used for evaluation.
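(Made-up numbers purely to illustrate: if the rebalancing step took 100 ms and fired every 10 s, it cost 1% of wall time, and a 100x speedup on that step can save at most that 1%; if it fired every 10 minutes, the saving is negligible either way.)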
e: also, a comparison against a fixed policy (nothing faster than 0!) and a random policy might be informative if your intent is to publish this as an improvement on the object problem, not just as a demonstration of ARDS.
It's okay to acknowledge that you missed something in your literature search. Everyone does. It's not okay to sweep it under the rug or pretend that it's novel after having the prior work pointed out to you, especially when a central part of your thesis is that "AI" discovered a novel algorithm and it's very likely that this algorithm was part of the LLM's training data.
Thanks! In realistic workloads, the differences won’t be orders of magnitude.
I agree that this is a fairly simple problem. Experienced engineers—or anyone who has faced similar challenges—can quickly come up with such solutions. The key point, however, is that others might get stuck in their research simply because they don’t realize these quick solutions exist (“I don’t know what I don’t know”). AI helps bridge that gap by making expert-level knowledge accessible to every researcher, allowing them to focus more on exploring the truly unknown parts.
Except that "AI" steals and mostly does not do citations.
EDIT: The chutzpah of downvoting this is striking. The paper says "surpasses highly optimized algorithms engineered by human experts to achieve a 5.0x speedup" and https://news.ycombinator.com/item?id=45689663 links to a 2024 paper where humans discovered a 4.2x speedup using a snake pattern. The 2024 paper is not cited.