When I think back to my very first client experience, I realize that life was so easy benchmarking on a single mainframe with a customized card deck that profiled most of their production batch jobs' behavior.
At that time, workloads were well known and slow to evolve, computing models were limited, and the same benchmarking process remained relevant year after year.
Nowadays, the variety of cloud models, sourcing types and offerings requires a different benchmarking approach to stay fully relevant to cloud environments.
From batch to cloud…
Batch jobs are steady workloads: predictive models and projections apply very well, and benchmarking on pure system performance criteria, such as job elapsed time and throughput, is relevant. In contrast, the cloud model brings new needs because of its service model abstraction, and requires new initiatives to develop cloud benchmarks.
The Standard Performance Evaluation Corporation (SPEC), a leading benchmark organization, recently created an Open Systems Group (OSG) Cloud Computing Working Group to investigate how to monitor and measure the performance of cloud systems. IBM has taken an active role in this ongoing initiative, which has already delivered a first public report: Report on Cloud Computing to the OSG Steering Committee.
Why is cloud a benchmark game changer?
Cloud cannot be measured only through the old batch or even transactional key performance indicators (KPIs), such as throughput or response time. New metrics are required to evaluate cloud's unique characteristics, as this non-exhaustive list shows:
- How quickly a service can adapt to changing customer needs
- Provisioning response time
- Time needed to bring up or drop a resource
- Scale up/down
- Ability to maintain a consistent unit completion time when solving increasingly larger problems by adding only a proportional amount of storage and computational resources
- How repeatable the test result is when configurations or background load on the system under test change
- Ability to scale the workload, and how closely the provisioned system can match the workload's needs
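To make these metrics concrete, here is a minimal sketch of how provisioning time and repeatability could be measured. The `provision_vm` and `deprovision_vm` callables are hypothetical adapters for whatever cloud management API is under test; they are assumptions, not part of any real benchmark suite.

```python
import time
import statistics

def measure_provisioning(provision_vm, deprovision_vm, runs=5):
    """Time repeated provision/deprovision cycles against a cloud API.

    `provision_vm` and `deprovision_vm` are hypothetical callables
    wrapping the cloud management platform under test.
    """
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        vm = provision_vm()  # time needed to bring up a resource
        samples.append(time.monotonic() - start)
        deprovision_vm(vm)
    mean = statistics.mean(samples)
    # Repeatability: coefficient of variation across identical runs;
    # a lower value means a more consistent provisioning service.
    cv = statistics.stdev(samples) / mean if runs > 1 and mean > 0 else 0.0
    return {"mean_s": mean, "cv": cv}
```

The coefficient of variation captures the repeatability concern above: the same request, repeated under the same configuration, should complete in roughly the same time.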
IBM has already worked on this topic and acquired expertise in this new space.
Sharing IBM's Cloud Meta-Benchmark (CloudBench)
Through the SPEC OSG report referenced above, we have shared information about IBM's CloudBench meta-benchmark framework, designed for infrastructure as a service (IaaS) clouds.
It automates provisioning, execution, data collection, management and other steps across an arbitrary number and variety of individual benchmarks.
CloudBench covers the following functions:
- Exercises the provisioned VMs by submitting requests to applications (individual benchmarks) running on them.
- Supports black-box testing, with some support for embedding data collection nodes inside the system under test (SUT) to collect metrics usually associated with white-box tests.
- Exercises the operational infrastructure by submitting VM provision/deprovision requests to the cloud management platform.
- Manages multiple application sets. The default workload generates various types of workloads but can be extended to support local custom application sets.
- Measures elasticity components: provisioning time and scale-up, as well as variability and agility.
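The cycle these functions describe (provision, exercise, measure, tear down) can be sketched as follows. CloudBench's actual interfaces are not detailed in this article, so the `cloud` and `workload` adapters and their method names here are illustrative assumptions only.

```python
import time

def run_experiment(cloud, workload, vm_count=4):
    """Illustrative meta-benchmark cycle, in the spirit of CloudBench:
    provision VMs, exercise them with a workload, then deprovision.

    `cloud` and `workload` are hypothetical adapter objects; the real
    CloudBench API is not shown in the source article.
    """
    results = {"provision_s": [], "workload": []}
    vms = []
    for _ in range(vm_count):
        start = time.monotonic()
        vm = cloud.provision()  # exercises the operational infrastructure
        results["provision_s"].append(time.monotonic() - start)
        vms.append(vm)
    for vm in vms:
        # exercises the provisioned VM with an application-level benchmark
        results["workload"].append(workload.run(vm))
    for vm in vms:
        cloud.deprovision(vm)
    return results
```

A driver like this can then be repeated with different `vm_count` values to observe scale-up behavior, or with background load present to observe variability.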
The following figure shows a typical execution flow for CloudBench.
CloudBench’s meta-benchmark execution flow.
This IBM meta-benchmark framework has already demonstrated that we have strong foundations to answer the cloud's need for new metrics.
This is a first but significant step toward moving cloud benchmarks forward, leaving the old card-deck style far behind to master cloud computing's new benchmarking needs…