### rev.ng installation
You will need a copy of the *rev.ng* framework. To get *rev.ng* up and running you'll need *orchestra*, a tool which automatically compiles all the software that composes *rev.ng*, and which also manages the runs of the SPECint 2006 benchmarks.
You can retrieve the *orchestra* repository at https://rev.ng/gitlab/revng-bar-2019/orchestra.git
Please keep in mind that this version of *orchestra* is the one that has been used for producing the results presented in the paper. Since then, some heavy changes have been made to *orchestra*, so if you are currently using a more recent version of *orchestra* consider that these instructions may not be valid anymore.
You'll also need a copy of the [SPECint 2006 benchmarks](https://www.spec.org/cpu2006/).
```
apt-get install --no-install-recommends \
  zlib1g-dev
```
You'll also need to install `git-lfs`:
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
apt-get install --no-install-recommends git-lfs
```
Now you can proceed with cloning the *orchestra* repository:
```
git clone https://rev.ng/gitlab/revng-bar-2019/orchestra.git
```
Then you'll need to build *rev.ng*. To do this, first move into the `orchestra` folder and check out the branch which contains all the tools and configurations for running the SPEC benchmarks.
Now you'll need to export some environment variables, which are used to prepare the build:
```
export BINARY_COMPONENTS=""
```
This tells *orchestra* to compile all the needed components from scratch.
```
export QEMU_DEFAULT_BUILD="qemu-release"
```
This tells *orchestra* to use the optimized version of QEMU during the translation of the binaries.
```
export LIBC_DEFAULT_CONFIG="gc-o2"
```
This tells *orchestra* to compile the `libc` library using optimizations.
```
export DEFAULT_TOOLCHAINS="x86-64"
```
This tells *orchestra* to compile only the toolchains for the `x86-64` architecture, without building all the other toolchains.
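In summary, the four variables exported in the steps above are (values taken verbatim from the instructions):

```shell
# Build configuration used for the paper's results.
export BINARY_COMPONENTS=""                # build every component from scratch
export QEMU_DEFAULT_BUILD="qemu-release"   # optimized QEMU for translation
export LIBC_DEFAULT_CONFIG="gc-o2"         # libc built with optimizations
export DEFAULT_TOOLCHAINS="x86-64"         # only the x86-64 toolchains
```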
At this point, you can build *rev.ng* using *orchestra* with the command:
```
make install-revamb
```
As for the SPECint benchmark installation, you can provide the path to the SPEC archive. Alternatively, you can directly place the archive in the `source-archives` folder inside the `orchestra` folder.
The archive structure should be made in such a way that, at the root level, there is a folder called `spec2006_int_fp/`, containing the content of the SPEC DVD. Basically, the archive structure should look like this:
```
user@machine:$ tar taf spec2006_int_fp.tar.gz
spec2006_int_fp/benchspec
spec2006_int_fp/bin
spec2006_int_fp/config
spec2006_int_fp/cshrc
spec2006_int_fp/Docs
spec2006_int_fp/Docs.txt
spec2006_int_fp/install_archives
spec2006_int_fp/install.bat
spec2006_int_fp/install.sh
spec2006_int_fp/LICENSE
spec2006_int_fp/LICENSE.txt
spec2006_int_fp/MANIFEST
spec2006_int_fp/README
spec2006_int_fp/README.txt
spec2006_int_fp/redistributable_sources
spec2006_int_fp/result
spec2006_int_fp/Revisions
spec2006_int_fp/shrc
spec2006_int_fp/shrc.bat
spec2006_int_fp/tools
spec2006_int_fp/uninstall.sh
spec2006_int_fp/version.txt
...
```
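As a quick sanity check, one can verify that every entry of the archive lives under the top-level `spec2006_int_fp/` directory. A minimal sketch (the stand-in archive built below is purely illustrative; point `TARBALL` at your real archive instead):

```shell
# Sanity check: every archive entry must live under spec2006_int_fp/.
TARBALL=${TARBALL:-spec2006_int_fp.tar.gz}
if [ ! -f "$TARBALL" ]; then
  # Build a tiny stand-in archive so the check can be demonstrated.
  mkdir -p spec2006_int_fp/bin
  touch spec2006_int_fp/install.sh
  tar czf "$TARBALL" spec2006_int_fp
fi
if tar tzf "$TARBALL" | grep -qv '^spec2006_int_fp/'; then
  echo "unexpected top-level entries"
else
  echo "layout OK"
fi
```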
### Running the benchmarks
At this point, we are ready to run the benchmarks. *orchestra* already provides various targets which are designed for this purpose.
For example, the following command builds and runs the benchmarks for the native binaries:
```
make toolchain/x86-64/spec/native/use-gc-o2-static
```
The results of the run will be stored inside the folder `build/toolchain/x86-64/spec/native/use-gc-o2-static/result` (the path is relative to the `orchestra` folder) in an uncompressed form. A bundle of the results obtained will also be saved in `root/x86_64-gentoo-linux-musl`.
By default, the benchmarks will execute three runs of the `ref` size of the benchmarks (which have been used to generate the data included in the paper). You can customize the SPEC invocation using the `SPEC_FLAGS` environment variable of *orchestra* (the default value is `--iterations=3 --loose --size ref int`).
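For instance, a single quick run of the smaller `test` workload could be requested like this (the flag values other than the default are illustrative):

```shell
# Illustrative override: one iteration of the "test" workload
# instead of the default three iterations of "ref".
export SPEC_FLAGS="--iterations=1 --loose --size test int"
```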
The targets seen above are the ones used for benchmarking the native binaries; there are three other pairs of targets for building and running the benchmarks: the targets for QEMU, for binaries translated with *rev.ng*, and for translated binaries to which function isolation has been applied.
The command:
```
make clean-toolchain/x86-64/spec/revambisolated/use-gc-o2-static
```
removes the benchmarks of the binaries translated with function isolation.
### Configure and build with *orchestra*
*orchestra* provides two useful targets to inspect the available targets and configurations. The first is `make help-components`, which prints all the targets that *orchestra* can build.
The `make help-variables` command, instead, prints all the environment variables that can be configured and which affect the building of the components (such as the variables used above to build QEMU in release mode instead of debug mode).
### SPEC integration in *orchestra*
The integration of the SPEC benchmarks with the *rev.ng* framework is done by the `revcc` script (which you can find in the `revamb/scripts` folder), which is in charge of invoking the compilation, lifting, function isolation and recompilation of the binaries that we want to translate using *rev.ng*.
To produce the results used in the paper, we computed, for each category (`native`, `qemu`, `revamb`, `revambisolated`) and for each benchmark, the average execution time over the 3 runs.
In the `raw-results` folder we provide the results as they are produced by SPEC, and in the `aggregated-paper-results` folder you can find, for convenience, the averages of the run times for each category and benchmark.
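The per-benchmark averaging can be sketched as follows (benchmark names and timings here are placeholders, not the paper's results):

```shell
# Average the execution time of each benchmark over its runs.
# Input format: "<benchmark> <seconds>", one line per run.
printf '401.bzip2 10.0\n401.bzip2 12.0\n401.bzip2 11.0\n403.gcc 20.0\n403.gcc 22.0\n403.gcc 21.0\n' |
awk '{ sum[$1] += $2; n[$1]++ } END { for (b in sum) printf "%s %.2f\n", b, sum[b]/n[b] }' |
sort
# prints:
# 401.bzip2 11.00
# 403.gcc 21.00
```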
Please consider that in the logs of the benchmark runs there may be some warnings saying that the runs were not *reportable*.
This is because we tested the `test` and `train` SPEC workloads separately and verified that they passed, and for timing reasons we collected only the results produced by 3 runs of the `ref` workload. You can produce a reportable run by adjusting the `SPEC_FLAGS` environment variable in *orchestra*.