# SteerSim and SteerBench: Three Common Tasks
There are three tasks that you are very likely to encounter when using SteerSuite: (1) testing steering AI using test cases, (2) recording and replaying those simulations, and (3) benchmarking recordings. This chapter explains how to perform these three tasks.
## Simulating a Test Case using SteerSim
Test cases define a variety of challenging scenarios for agent steering, and they are very useful for developing, debugging, and evaluating your own steering algorithms. The basic command line to simulate a test case using a specific steering AI module is:
```
./steersim -testcase <test-case-path> -ai <moduleName>
```
`<test-case-path>` refers to any SteerSuite XML test case. There are many test cases already provided with SteerSuite in the testcases/ directory, ranging from simple single-agent scenarios to large-scale, extremely challenging crowd scenarios. `<moduleName>` refers to any steersim module that provides steering AI. SteerSuite provides some steering modules you can experiment with: `dummyAI` does absolutely nothing, which is useful for seeing the initial conditions of a test case; `simpleAI` is a very basic demo of agents that ignore other agents and obstacles; `pprAI` (will be available soon) is a more complete implementation of a steering agent, which users can use as a starting point for their own experiments and implementations.
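For example, to view only the initial conditions of one of the bundled test cases, you could load the `dummyAI` module. This is a minimal sketch; the relative path assumes the same working directory as the examples later in this chapter:

```
# View the initial conditions of a test case; dummyAI performs no steering.
./steersim -testcase ../../testcases/simple-1.xml -ai dummyAI
```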
Internally, the `-testcase` and `-ai` options indirectly tell the engine to load the `testCasePlayer` module. The engine recognizes these options and knows to pass the `<test-case-path>` and `<moduleName>` as options to the `testCasePlayer` module. This is exactly equivalent to explicitly specifying the module and its options directly on the command line:
```
./steersim -module testCasePlayer,testcase=<test-case-path>,ai=<moduleName>
```
By specifying the module and its options directly, it is possible to pass more options to the `testCasePlayer` module. You can also specify these same options from an XML configuration file (see SteerSuiteConfigFile). To see all options that can be specified to the `testCasePlayer` module, refer to the Reference Manual.
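To make this concrete, here is the explicit form of the `simple-1.xml` example used below (the relative path assumes the same working directory as those examples); any additional `testCasePlayer` options from the Reference Manual could be appended to the same comma-separated list:

```
# Explicitly load testCasePlayer, passing its options as a comma-separated list.
./steersim -module testCasePlayer,testcase=../../testcases/simple-1.xml,ai=simpleAI
```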
It is common to make the following mistake:
```
./steersim -testcase ../../testcases/simple-1.xml -module simpleAI   # This command will NOT use the simpleAI module!
```
This tells steersim to load the `testCasePlayer` and `simpleAI` modules, using the `simple-1.xml` test case. However, it does not tell the `testCasePlayer` module to use the `simpleAI` module for the AI. The correct command line should use the `-ai` option to specify the AI module:
```
./steersim -testcase ../../testcases/simple-1.xml -ai simpleAI
```
Eventually you may want to test your own steering algorithm on the SteerSuite test cases. To do this, there are two main options: (1) use SteerLib to read test cases into your own steering code, or (2) develop your steering AI as a plugin for SteerSim. Refer to Section 4.3, “Reading a Test Case” and Section 5.3, “Creating a SteerSim Plugin” for more information.
## Recording and Replaying a Simulation

The `steersim` simulations can also be recorded to a file. These recordings, of course, can then be replayed at a later time. We found it useful to nickname these recordings "rec files." Rec files are intended to be an easy way to archive and share results, even when someone does not want to share their source code.
To make a recording, use the `-storesimulation` option as follows:
```
./steersim -testcase <test-case-path> -storesimulation simple.rec
```
This command stores the AI simulation into the file `simple.rec`. The `.rec` extension is not required, but it is the preferred extension for rec files. To replay the rec file, use the `-replay` option. This option cannot be used at the same time as the `-testcase` option.
```
./steersim -replay simple.rec
```
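Putting the pieces together, an end-to-end session might look like the following sketch. It assumes the `-ai` and `-storesimulation` options can be combined in a single invocation, which the shortcuts above suggest but do not show explicitly:

```
# Simulate a test case with simpleAI and record the result ...
./steersim -testcase ../../testcases/simple-1.xml -ai simpleAI -storesimulation simple.rec

# ... then replay the recording at a later time.
./steersim -replay simple.rec
```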
While replaying a simulation, you can use the left and right arrow keys to shuttle back and forth along the timeline of the simulation. If the simulation plays too fast or too slow, you can also use the + (without pressing shift) and - keys to speed up or slow down the simulation, respectively.

## Specifying the Modules Directly

Similar to the test case options described above, the `-storesimulation` and `-replay` options are actually shortcuts for loading and specifying options to the appropriate modules. It is possible to specify more options by explicitly loading the `simulationRecorder` and `recFilePlayer` modules. For example, to store a simulation:
```
./steersim -testcase <test-case-path> -module simulationRecorder,file=simple.rec
```
and to replay a simulation:
```
./steersim -module recFilePlayer,file=simple.rec
```
When an existing recording is replayed, it actually creates agents that can be manipulated and analyzed just like agents produced by other steering AI modules. Look carefully at the following example:
```
./steersim -module recFilePlayer,file=simple.rec,retime=1.5 -storesimulation simple-fast.rec -commandline -numframes 60
```
This example simulates 60 frames, and on each frame of the simulation it advances 1.5 frames in the playback. This means it will replay 90 frames of `simple.rec`, 50 percent faster. It then stores the 60 frames of simulation into `simple-fast.rec`. In other words, this example essentially transcodes a recording into a faster recording.
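By the same arithmetic, a `retime` value below 1.0 should slow playback down. The following sketch assumes the `recFilePlayer` module accepts fractional `retime` values less than 1 (not confirmed here); it would cover the same 90 playback frames in 180 simulated frames, producing a half-speed recording:

```
# Hypothetical half-speed transcode: 180 simulated frames x retime=0.5
# covers the same 90 playback frames of simple.rec.
./steersim -module recFilePlayer,file=simple.rec,retime=0.5 -storesimulation simple-slow.rec -commandline -numframes 180
```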
## Benchmarking a Recording

One of the goals of SteerSuite is to help users evaluate and benchmark steering behaviors. In addition to visually inspecting the results of many test cases, users can see benchmark scores of their simulations. Recordings can be benchmarked using the `steerbench` utility. `steerbench`, in turn, uses functionality provided by SteerLib, which includes computing many metrics of the simulation, and several benchmark techniques that compute a simple score. To benchmark a recording, `steerbench` can be invoked from a command line as follows:
```
./steerbench simple.rec
```
This command will run the default benchmark technique on `simple.rec`. A specific benchmark technique can be chosen using the `-technique` option. The following example benchmarks the agents recorded in `simple.rec`, based on the `composite01` scoring technique:
```
./steerbench -technique composite01 simple.rec
```
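Combining this with the recording workflow from the previous section, a typical offline evaluation might look like the following sketch (again assuming `-ai` and `-storesimulation` can be combined in one invocation):

```
# Record a simulation of simpleAI on a test case, without opening the GUI ...
./steersim -testcase ../../testcases/simple-1.xml -ai simpleAI -storesimulation simple.rec -commandline

# ... then score the recording with the composite01 technique.
./steerbench -technique composite01 simple.rec
```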
The `composite01` technique was the original benchmark technique created for SteerBench. See the Reference Manual for a complete list of options for `steerbench` and for in-depth explanations of the various metrics and benchmark techniques.

## Interpreting Benchmark Scores
**Important**
Scores from different benchmark techniques usually have very different interpretations. Sometimes a single benchmark score does not have an interpretation by itself; it may need to be compared to other scores. The safest interpretation is to compare scores only if they use the same benchmark technique and the same initial conditions (i.e., the same test case). Other ways of interpreting the score, for example comparing scores across different test cases or interpreting a single score by itself, may or may not be valid depending on the benchmark technique.
SteerBench can also be used while a simulation is running in `steersim`, using the `steerBench` module. The following command tests the `simpleAI` steering agents on the `curves.xml` test case while computing a benchmark score using the `composite01` benchmark technique:
```
./steersim -testcase curves.xml -ai simpleAI -module steerBench,technique=composite01
```
Online benchmarking can be useful to avoid the extra step of recording the simulation.
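For instance, a batch-style run without the GUI might look like the following sketch. The `-commandline` flag appears in the replay example above; whether it combines with the `steerBench` module in exactly this way is an assumption:

```
# Hypothetical batch run: benchmark simpleAI online, without opening the GUI.
./steersim -testcase curves.xml -ai simpleAI -module steerBench,technique=composite01 -commandline
```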