This is the introduction to a 5-part series on DW/Analytics Platform Performance analysis and its use in platform and application benchmarking.
Benchmarks have long been used by vendors to highlight their products’ performance or price-performance advantages. They are also used by customers’ development and procurement teams to winnow down the field of potential solutions, and often to drive a “bake-off” and/or proof-of-concept (POC) evaluation prior to adoption.
In the earlier stages of the process, published results for “standard” benchmarks are often used to “qualify in” various offerings. In the later stages, both standard benchmarks and custom benchmarks (customer- or application-specific) are common as part of the evaluation and even product acceptance.
But there are also critical, often-overlooked needs and opportunities for continual benchmarking in today’s DW/Analytics environments, especially as companies grapple with aging and over-committed internal infrastructure, weigh the opportunities afforded by the evolution of the Cloud, or rethink analytics approaches and priorities in light of new analytics initiatives (e.g., ML) or M&A activity.
Through the next 5 articles (with some relevant excursions), we’ll explore the current state of benchmarking in the DW/Analytics space and present a methodology that can be used to continually improve the effectiveness of existing and new deployments within an organization.
- Part 1. Background: Lies, Damned Lies, and Benchmarks
- Part 2. A Picture is Worth 1000 Words: A visual, whole-workload approach to benchmarking
- Excursion 1: A Picture is Worth 1000 Words REDUX
- Excursion 2: Schema Matters: Data Distribution and Ordering
- Part 3. Scale Matters (1): Concurrency scaling and the effect on query response times and response-time variability
- Part 4. Scale Matters (2): Scalability of datasets (beyond all-in-memory) and effective use of dynamic resource scaling
- Part 5. The Ice Under the Water: Data operations and maintenance considerations