Re: Quick, Dirty (Score: 2, Informative) by firstname.lastname@example.org in PostgreSQL goes after MongoDB; benchmarks show PostgreSQL in the lead on 2014-09-26 01:24 (#2SYC)

> Well yeah, except their primary use case (literally, it's the first use case listed) is BIG DATA. It's fair to say that if you're routinely pushing and parsing terabytes of structured data, you probably can and should take a day or two to get the database optimized, no?

No. You're simply stuck in a mindset of high-value databases. Try low-value data, on a large scale, instead... Turn your syslog logging up to the maximum amount of debug, then expand that out to hundreds and hundreds of heavily loaded servers, then log it all to a central system that's desperately trying to write it to a database for eventual aggregation and reporting. Or consider something like an IDS or other monitoring on high-speed data networks, trying to keep track of data usage, in detail, on those gigabit-speed lines around the clock.

Or just consider the cost of an extra server (with SSDs) versus the cost of hours of a DBA's time... For non-critical data in general, you're going to expand the cluster rather than spend time and effort tuning things.
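For concreteness, the "max debug, ship everything to one central box" setup above looks roughly like this in classic rsyslog/syslog.conf terms (the collector hostname is made up for illustration):

```
# /etc/rsyslog.conf on each of the hundreds of source servers:
# keep every message, all the way down to debug severity...
*.debug     /var/log/everything.log
# ...and also forward everything over TCP (@@ = TCP, @ = UDP)
# to the central collector, which then tries to stuff it into a database
*.*         @@logcentral.example.com:514
```

Multiply that firehose by a few hundred busy servers and the central database's write path becomes the bottleneck; nobody is going to spend DBA hours hand-tuning schemas and indexes for throwaway debug logs, they'll just add another cheap node.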