The initial motivation is to run benchmarks, though the foundation is flexible and can support many other use cases over time.
It's already proving useful. For example, I can run a benchmark, view the results in a dashboard, and even feed the report into Claude Code to answer questions like:
"How did changing X affect the results?" or "What could be improved in the next run?"
For users, it could be useful to separate the what from the how.
For example, we could have a function that launches effects to manipulate the file system, and for testing or mocking we could catch those effects with handlers that fake the file system instead.
I know the dependency injection pattern solves the same problem, but with effects the code reads more naturally.
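As a rough sketch of that idea, here is one way to simulate effects and handlers in Python using generators. Everything here is hypothetical (the `ReadFile` effect, `word_count`, `run`, and `mock_handler` are illustration names, not part of any real library): the business logic yields effect requests, and a handler decides how each one is fulfilled, so a test can swap in a handler that never touches the disk.

```python
from dataclasses import dataclass

# Hypothetical effect: a request to read a file. The logic below never
# performs I/O itself; it only describes what it needs.
@dataclass
class ReadFile:
    path: str

def word_count(path):
    # The "what": yield an effect and use whatever the handler returns.
    text = yield ReadFile(path)
    return len(text.split())

def run(program, handler):
    # The "how": drive the generator, letting the handler interpret
    # each yielded effect and sending the result back in.
    try:
        effect = next(program)
        while True:
            effect = program.send(handler(effect))
    except StopIteration as stop:
        return stop.value

def mock_handler(effect):
    # Test handler: serves canned file contents instead of reading disk.
    if isinstance(effect, ReadFile):
        return "hello effectful world"
    raise NotImplementedError(effect)

result = run(word_count("notes.txt"), mock_handler)
print(result)  # 3
```

A production handler would pattern-match the same effects but actually open the files, so the `word_count` logic stays identical in tests and in real use.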