Sample ESESC simulation run

by Ehsan K. Ardestani and Jose Renau

Once you have compiled ESESC, you already have a working crafty run. This blog entry shows you how to use the launcher to execute more benchmarks.

Launcher Binary

ESESC can run programs compiled for ARM without any modifications. You can compile your program, and ask ESESC to run it by pointing to the benchmark binary in esesc.conf:

benchName  = "myProgram  myArguments"

Nonetheless, we have facilitated the process of running the common benchmarks (for example applications from SPEC or PARSEC benchmark suite). ESESC is provided with an application called launcher. The launcher app provides a single point of entry to specify the benchmark:

benchName = "launcher -- stdin crafty.in --crafty"

will launch crafty. The following would run crafty and gcc (SPEC 2006) together:

benchName = "launcher -- stdin crafty.in --crafty -- gcc_06  166.i -o 166.s"

This lets us run different benchmarks, alone or as a mix. Only one application in the mix can read from standard input, and the stdin file, if any, must be the first argument. For the dual-benchmark example above, we also have to update esesc.conf to specify a dual-core configuration.

Single core:

cpuemul[0]  = 'QEMUSectionCPU' 
cpusimu[0]  = "$(coreType)"

Dual core:

cpuemul[0:1]  = 'QEMUSectionCPU' 
cpusimu[0:1]  = "$(coreType)"
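
Putting the pieces together, a dual-core mix might look like the following esesc.conf sketch. The option names (benchName, cpuemul, cpusimu) come from the examples above; the exact benchmark arguments are illustrative:

```
# esesc.conf sketch: two simulated cores running a crafty + gcc mix
cpuemul[0:1]  = 'QEMUSectionCPU'
cpusimu[0:1]  = "$(coreType)"

# crafty reads crafty.in on stdin; gcc_06 takes plain command-line arguments
benchName = "launcher -- stdin crafty.in --crafty -- gcc_06  166.i -o 166.s"
```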

Set the directory

You should do the following steps:

Compile ESESC, and remember the ESESC binary location (e.g., ~/build/esesc-release/main/esesc)

Create a running directory

mkdir -p ~/build/esesc-release/run
cd ~/build/esesc-release/run

Get the input sets. We cannot provide the SPEC input sets for copyright reasons. The PARSEC input sets are available at the following location:

wget http://masc.soe.ucsc.edu/esesc/resources/mtInputs.tar.bz2
tar xjvf mtInputs.tar.bz2

To save disk space, it is a good idea to share the input sets across all the ESESC runs. For example, create a common ~/projs/benchmarks directory and extract mtInputs there.
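
The sharing scheme above can be sketched with a symlink. The mktemp directories below stand in for your real benchmark and run directories; substitute your own paths:

```shell
# Sketch: keep a single shared copy of the input sets and symlink it into
# each run directory instead of extracting the tarball per run.
bench_dir=$(mktemp -d)    # stands in for ~/projs/benchmarks
run_dir=$(mktemp -d)      # stands in for ~/build/esesc-release/run

mkdir -p "$bench_dir/input"    # normally: tar xjvf mtInputs.tar.bz2 -C "$bench_dir"
ln -sfn "$bench_dir/input" "$run_dir/input"   # the run dir now sees the shared inputs
```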

Create the simulation scripts

The simulation scripts are generated with mt-scripts.rb, which ESESC provides under the conf directory.

In this sample setup, we assume that you have cloned the ESESC repository in the ~/projs/esesc directory.

For the benchmark set (the -b option), you can specify one of:

single_core
dual_core
quad_core
octo_core
parsec_dual_core
parsec_quad_core
parsec_octo_core
parsec_16_core
mix_octo_core

You also need to specify the ESESC binary location, the sampling mode, your email address, the input-set location, the configuration file, and the launcher location. A valid sample invocation follows.

Start by making sure that you have a local esesc.conf configuration

cd ~/build/esesc-release/run
cp ~/projs/esesc/conf/*conf .

Test that everything is ready before building the scripts

mkdir exe
cp ~/projs/esesc/bins/crafty* exe
../main/esesc <exe/crafty.in

If the default test simulation finishes (less than 10 minutes, during which ESESC prints progress sequences such as rrrr and rwrw), create the scripts to run ESESC:

~/projs/esesc/conf/scripts/mt-scripts.rb -b single_core \
--local -x ../main/esesc -m smartsmode -e foo@ucsc.edu \
-j ./input \
-c ./esesc.conf -u ~/projs/esesc/conf/launcher

or use the following, if you keep the input sets in ~/projs/benchmarks:

~/projs/esesc/conf/scripts/mt-scripts.rb -b single_core \
--local -x ../main/esesc -m smartsmode -e foo@ucsc.edu \
-j ~/projs/benchmarks/input \
-c ./esesc.conf -u ~/projs/esesc/conf/launcher

Running mt-scripts.rb produces one run script per benchmark. For example:

leslie3d_smartsmode-run.sh
mcf_smartsmode-run.sh
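
If you want to launch all the generated scripts back to back, a small wrapper loop works. Note that run_all below is a hypothetical helper, not something mt-scripts.rb provides:

```shell
# Hypothetical helper: run every generated *-run.sh script in a directory,
# one after another. This is a sketch, not part of ESESC.
run_all() {
  dir=${1:-.}
  for script in "$dir"/*-run.sh; do
    [ -e "$script" ] || continue      # no scripts generated yet
    echo "launching $script"
    sh "$script"
  done
}
```

For example, `run_all .` inside ~/build/esesc-release/run launches each benchmark sequentially.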

Sample run

For example, you can run mcf as follows (warning: mtInputs does NOT include the SPEC input sets for copyright reasons; you need a SPEC license to obtain them):

./mcf_smartsmode-run.sh

This creates a simulation inside the mcf_smartsmode directory. Once the simulation finishes, check the results with:

cd mcf_smartsmode
~/projs/esesc/conf/scripts/report.pl -a

Sample ESESC compilation setup

by Ehsan K. Ardestani and Jose Renau

This blog entry shows the easiest way to get ESESC running, using the crafty benchmark from SPEC CPU 2000 (binary and modified input set included).

Compilation

In this sample setup, we assume that you have cloned the ESESC repository in the ~/projs/esesc directory.

You can then inspect the source code there:

cd ~/projs/esesc

Make sure that you have the required packages installed on your machine. Read README.install for the main required packages.

ESESC can be built in Debug or Release mode. This sample uses Debug mode, which is slower but produces many meaningful warnings. We strongly suggest using Debug mode while developing or trying to understand ESESC.

mkdir -p ~/build/esesc-debug
cd ~/build/esesc-debug
cmake -DCMAKE_BUILD_TYPE=Debug ~/projs/esesc
make 

Once you are ready to do longer simulations (anything that would take more than 30 minutes in Debug mode), compile ESESC with the Release target (the default). Release is much faster, but if anything goes wrong during simulation it will simply crash without warnings. If you see a crash, switch back to Debug, solve the problem, and then return to Release.

mkdir -p ~/build/esesc-release
cd ~/build/esesc-release
cmake ~/projs/esesc
make 

As you may have noticed, ESESC takes a while to compile (though not as long as the Linux kernel). As usual, you can speed up compilation on a multicore machine (-j4 spawns 4 parallel compilation jobs):

make esesc -j4

To run ESESC, it is recommended to create a run directory. Then copy the configuration files in esesc/conf to the run directory along with the benchmark executable. For example to run in Release mode (which is much faster than Debug mode):

cd ~/build/esesc-release
mkdir -p run
cd run
cp ~/projs/esesc/bins/crafty* .
cp ~/projs/esesc/conf/*.conf .

You can run esesc with the default parameters:

../main/esesc <crafty.in

Once the simulation is finished, check the results with

~/projs/esesc/conf/scripts/report.pl -a

You can check esesc.conf to see which benchmark is being simulated (by default it runs crafty from the SPEC benchmark suite) and which sampling parameters are used. shared.conf contains the architectural configuration for each core/component in the system. More on running different applications coming soon.
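
As a sketch of what to look for when you open esesc.conf (the option names are the ones ESESC uses; the values shown are the defaults for this sample):

```
# esesc.conf sketch: the default single-core crafty setup
cpuemul[0]  = 'QEMUSectionCPU'
cpusimu[0]  = "$(coreType)"     # core model; its parameters live in shared.conf

# crafty reads crafty.in on standard input
benchName = "launcher -- stdin crafty.in --crafty"
```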

Sampling in Thermal Simulation of Processors: Measurement, Characterization, and Evaluation

by Ehsan K. Ardestani

The paper Sampling in Thermal Simulation of Processors: Measurement, Characterization, and Evaluation by Ehsan K. Ardestani, Francisco J. Mesa-Martinez, Gabriel Southern, Elnaz Ebrahimi, and Jose Renau will appear in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD). It gives an overview of the role of sampling in the thermal evaluation of processors. ESESC provides the simulation infrastructure for this paper.

Abstract

Power densities in modern processors induce thermal issues which limit performance. Power and thermal models add complexity to architectural simulators, limiting the depth of analysis. Prohibitive execution time overheads may be circumvented using sampling techniques. While these approaches work well when characterizing processor performance, they introduce new challenges when applied to the thermal domain. This work aims to improve the accuracy and performance of sampled thermal simulation at the architectural level.

To the best of our knowledge, this paper is the first to evaluate the impact of statistical sampling on thermal metrics through direct temperature measurements performed at run time. Experiments confirm that sampling can accurately estimate certain thermal metrics. However, extra care is needed to preserve the accuracy of temperature estimation in a sampled simulation, mainly because, on average, thermal phases are much longer than performance phases. Based on these insights, we introduce a framework that extends statistical sampling techniques, used at the performance and power stages, to the thermal domain. The resulting technique yields an integrated performance, power, and temperature simulator that maintains accuracy while reducing simulation time by orders of magnitude. In particular, this work shows how dynamic frequency and voltage adaptations can be evaluated in a statistically sampled simulation. We conclude by showing how the increased simulation speed benefits architects in the exploration of the design space.
