MSC Benchmark Revision 2.0

The MSC Benchmark Revision 2.0 (released February 2012) is designed both to examine specific performance characteristics of proposed systems and to run applications of interest to EMSL. The micro-benchmarks represent many of the component operations in those applications. Micro-benchmark results will also be used as input to a scaling model to predict and test the scalability of one or more of the applications of interest.
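
For illustration only, a simple strong-scaling model such as Amdahl's law could be fitted to measured run times; the Python sketch below uses SciPy's curve_fit and hypothetical timings, not actual benchmark data:

    # Minimal sketch: fit the Amdahl's-law model T(p) = T1 * (s + (1 - s) / p)
    # to measured run times, then predict the run time at a larger size.
    # The timings below are hypothetical placeholders, not EMSL results.
    from scipy.optimize import curve_fit

    def amdahl(p, t1, s):
        # Predicted run time on p processors given serial fraction s.
        return t1 * (s + (1.0 - s) / p)

    procs = [1, 2, 4, 8, 16]                  # system sizes tested
    times = [100.0, 52.0, 28.0, 16.0, 10.5]   # hypothetical seconds

    (t1, s), _ = curve_fit(amdahl, procs, times, p0=(times[0], 0.05))
    print(f"serial fraction ~ {s:.3f}; predicted T(64) ~ {amdahl(64, t1, s):.1f} s")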

The benchmarks have been designed to have reasonable run times. They are to be run at various system sizes, up to the maximum size available, on a system built from technology as close as possible to that of the intended systems. All program output is expected to be returned so that EMSL staff can review it. Any specific optimizations used, in terms of compiler flags and source code modifications, should be recorded and provided as well.
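
A minimal sketch of one way to keep such a record alongside each run follows; the field names and values are hypothetical, not a prescribed format:

    # Minimal sketch: record the build configuration and output location
    # for a single benchmark run. All fields here are hypothetical.
    import json

    run_record = {
        "benchmark": "example_microbenchmark",  # hypothetical name
        "system_size": 1024,                    # e.g. processor count used
        "compiler": "gcc 11.2",
        "compiler_flags": "-O3 -march=native",
        "source_modifications": "none",
        "output_file": "run_1024.out",
    }

    with open("run_1024.json", "w") as f:
        json.dump(run_record, f, indent=2)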

Each benchmark includes a README file with instructions on how the benchmark is to be configured and run. Any source changes made for optimization purposes must produce valid scientific results and be of production quality. Such changes are allowed for the benchmarks submitted for an RFP; further changes to the code will NOT be allowed between award and acceptance.

All benchmark results are required, along with the original source files, any source modifications, and the program output. The actual source code used, together with the scripts to compile and run the benchmarks, is also required. Submitted source shall be in a form that can readily be compiled on the system, and the submission shall not contain executables, object files, core files, or other large binary data files.
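
As one possible approach, a short script could assemble the submission archive while filtering out binary artifacts; the directory name, archive name, and file patterns below are assumptions:

    # Minimal sketch: package sources and scripts for submission while
    # excluding executables, object files, and core files. The directory
    # layout and patterns are assumptions, not EMSL requirements.
    import tarfile
    from pathlib import Path

    EXCLUDED_SUFFIXES = {".o", ".so", ".a", ".exe"}
    EXCLUDED_NAMES = {"core"}

    def include(path: Path) -> bool:
        return (path.suffix not in EXCLUDED_SUFFIXES
                and path.name not in EXCLUDED_NAMES)

    with tarfile.open("benchmark_submission.tar.gz", "w:gz") as tar:
        for path in Path("benchmarks").rglob("*"):  # assumes a benchmarks/ tree
            if path.is_file() and include(path):
                tar.add(path)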

Downloads:

Micro Benchmarks

Application Benchmarks