You can use Hadoop to scale to 100TB on commodity PCs and still keep it manageable, which is much easier and cheaper than doing the same with Oracle. The feature set can easily dwarf what Oracle offers: you can MapReduce over all the data with your own Hadoop jobs, and it can hold whatever kind of data you want; you're not limited to a static schema and precomputed summaries. Hive runs on commodity PCs at Facebook over more than 100TB, and they prefer it to their enormous, overly expensive Oracle OLAP system. So there's your answer, for one use case.
But of course, if you're updating the data often, you wouldn't use Hadoop (or if you were, you'd run HBase or something similar on top of it). There are plenty of use cases at that scale where you only write once, though, and in those cases, from my perspective, it's much nicer to scale on commodity hardware than on big iron.