Spark is a great technology, for sure. I was hesitant to get into Spark because I have lots of experience writing Hadoop MapReduce apps. Then a while back I decided to base all of the machine learning examples in my current book project on Spark and MLlib, and I am happy with that decision.
As the article mentioned, IBM certainly did validate the Linux "market." When people would ask me what was great about Linux I used to just say that IBM was investing billions in Linux, and that was an acceptable answer for people.
Curious what your concerns with Spark were? I work for a company that supports Spark development, but I don't work closely with that project, so my opinion is not sufficiently well-informed, and obviously biased.
As far as I know, virtually any MapReduce job can be rather trivially translated to Spark's .map() and .reduce() operations. The downsides: its model hasn't yet been proven at the largest scales MapReduce has been used at, and possibly the use of Scala (although Java / Python bindings are obviously available). Were there any other major factors in your hesitance?
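To make the "trivially translated" claim concrete, here's a toy sketch of the classic word-count job in plain Python, with the rough Spark RDD equivalents noted in comments. This runs on ordinary lists rather than a cluster, so it's an illustration of the map/reduce shape, not actual Spark code.

```python
from functools import reduce

# Toy word count: the map and reduce phases of the classic
# Hadoop example, restated so the Spark translation is obvious.

lines = ["to be or not to be", "to do is to be"]

# Map phase: emit (word, 1) pairs.
# Spark equivalent: lines.flatMap(lambda l: l.split()).map(lambda w: (w, 1))
pairs = [(w, 1) for line in lines for w in line.split()]

# Reduce phase: sum the counts per key.
# Spark equivalent: pairs.reduceByKey(lambda a, b: a + b)
def merge(acc, pair):
    word, n = pair
    acc[word] = acc.get(word, 0) + n
    return acc

counts = reduce(merge, pairs, {})
print(counts)  # {'to': 4, 'be': 3, 'or': 1, 'not': 1, 'do': 1, 'is': 1}
```

In real Spark the same two phases are just chained RDD transformations, which is most of what makes porting a Hadoop job straightforward.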
I didn't have concerns about Spark, rather I already felt comfortable with Hadoop.
Another issue is that I am sort of retired now. I still accept small consulting jobs and do a lot of writing but my technology choices have shifted to fun things like Pharo Smalltalk, Haskell, etc.