TAU 2011 Power Grid Simulation Contest



Contest Benchmarks and Results

  1. Test benchmarks: ibmpg1 to ibmpg6 from this site. Two new benchmarks will be added soon (only these two benchmarks will be released before the contest).
  2. Five new benchmarks will be released soon.
  3. TAU 2011 Contest Presentation with Results.

Schedule


Jan 12, 2011 New contest website is up with instructions.
Jan 25, 2011 Release more details about the contest, such as the rules and the evaluation metrics.
Feb 11, 2011 Receive alpha (preliminary) binaries from all teams.
Each team is expected to generate solutions for the six test benchmarks, in the published input and output format, by that date.
Mar 07, 2011 Start to receive final binaries from all teams.
Mar 11, 2011 Deadline for final binaries from all teams.
Mar 31-Apr 1, 2011 TAU 2011 workshop; contest results announced.

Submission Information

  1. Please name your binary after your solver. We do not want to receive five binaries named "solver" and five named "main".
  2. There is no need to print extra information if doing so costs runtime. All we need is the solution file, and we will measure the runtime ourselves.
  3. If you have any special performance-tuning parameters, please remove them or hard-code them inside your program. For the contest, we accept only two command-line options: one for the input file and one for the output file.
  4. Please provide an argument to your program so that we can specify the solution file name.
  5. Please try to submit a statically linked binary, as it helps with portability (use gcc -static).
  6. Each team is allowed to submit a single binary that should run on all the benchmarks.
  7. For this contest we would like to evaluate single-core versions of the power grid simulation algorithms; hence, no parallel implementations are allowed.
  8. No precomputed information can be used to influence the current run. The run directories will be cleaned prior to each run.
  9. Purpose of the alpha (preliminary) binary submission: (a) to verify that we can run your binary on the contest machine; (b) to test that your binary can produce the required formats on the test benchmarks.
  10. Memory limit per job: 64 GB RAM, 16 GB swap
  11. Machine configuration for the contest:
    • OS: Ubuntu 10.10. Linux 2.6.35-22-server
    • CPU: 64-bit Intel(R) Xeon(R) CPU E7210 @ 2.40GHz
    • GCC version: gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
      Target: x86_64-linux-gnu
    • BLAS: libblas-dev, 1.2-7. liblapack-dev, 3.2.1-8
    • Tcl: 8.5.8-2build1
    • Information from /proc/cpuinfo:
          processor       : 0
          vendor_id       : GenuineIntel
          cpu family      : 6
          model           : 15
          model name      : Intel(R) Xeon(R) CPU           E7210  @ 2.40GHz
          stepping        : 11
          cpu MHz         : 2398.625
          cache size      : 4096 KB
          physical id     : 3
          siblings        : 2
          core id         : 0
          cpu cores       : 2
          apicid          : 12
          initial apicid  : 12
          fpu             : yes
          fpu_exception   : yes
          cpuid level     : 10
          wp              : yes
          flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca lahf_lm tpr_shadow vnmi flexpriority
          bogomips        : 4797.25
          clflush size    : 64
          cache_alignment : 64
          address sizes   : 40 bits physical, 48 bits virtual
          power management:
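Items 3 and 4 above pin down the expected command-line interface: exactly two positional arguments, the input netlist and the output solution file. A minimal sketch in Python (actual entries will typically be compiled binaries; the solver and file names here are hypothetical):

```python
import sys

def main(argv):
    # Contest interface: exactly two options, input file then output file.
    if len(argv) != 3:
        sys.stderr.write("usage: mysolver <input.spice> <output.solution>\n")
        return 1
    input_path, output_path = argv[1], argv[2]
    # ... parse the netlist, solve the grid, write the node voltages ...
    print("reading %s, writing %s" % (input_path, output_path))
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Anything beyond these two arguments (tuning knobs, verbosity flags) should be removed or hard-coded before submission.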

Evaluation and Ranking

Utility Scripts

Frequently Asked Questions

  1. Some of you had questions about one of the ranking rules, specifically the rule stating that

    "if your simulator runs more than 10x of our matlab direct solver for a benchmark, it gets 400 score for this benchmark automatically even though the simulator may get exact solution".

    In order to have a basis for comparison, we knew that we needed a baseline to allow us to weigh the various performance metrics of each of your codes. As you know, the metrics we chose were CPU time, Memory, Average Error, and Maximum Error. Since each of these quantities has a different unit, we needed to normalize them to the same basis. To do this, we decided that it would not be fair to use a mature power grid analysis tool (such as the one IBM uses internally), so instead we implemented a simple and short set of small scripts to read the Spice netlists, create the corresponding matrix, and solve that matrix using Matlab.

    We believe that such a simple approach represents the "zero effort" and "zero error" baseline for how one might solve a power grid. We intend to put the scripts on the web site after the contest is finished. In addition, we firmly believe that contestants should strive to do much better than such a simple-minded approach, so we adjusted the scoring function so that a simulator running more than ten times slower than the baseline is flagged as clearly impractical.
    If any of you have more questions about this, please feel free to email us.

  2. Is the simulator going to be run on a single-core CPU, a multi-core CPU, or GPU?
    For this year's contest we would like to evaluate single-threaded versions of the simulation algorithms. Hardware acceleration will not be considered. Each team is required to submit binary code that can run on a single-core Linux machine. For future contests (if there are any), we may provide more advanced hardware platforms (e.g., GPUs).

  3. Will the contest fix the simulation method that is going to be used, or we can choose any feasible methods, like Cholesky, multi-grid, and CG?

  4. Do we have to write the functions for the basic matrix computations, like Cholesky decomposition?
    It is the team's choice. If you use an external library, you need to link it statically and make sure the library runs on the platform we release.

  5. Do we have to write the parser to read the netlist for our simulator?

  6. Will the benchmark have RC information and do we need to solve transient cases?
    No. We focus only on the DC solution this year.
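Several of the answers above (the Matlab baseline in question 1, Cholesky in questions 3-4, DC-only in question 6) amount to the same computation: assemble the nodal conductance matrix G from the netlist and solve G v = i once. A sketch in NumPy rather than Matlab, with a made-up two-node grid (the real matrices come from the Spice benchmarks and are far larger and sparse):

```python
import numpy as np

# Toy 2-node grid, made up for illustration: a 1 V supply feeds node 1
# through a 1-ohm resistor, a 1-ohm resistor joins nodes 1 and 2, and a
# 0.1 A load sinks current at node 2.  Nodal analysis gives G v = i.
G = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])   # conductance matrix (symmetric positive definite)
i = np.array([1.0, -0.1])      # injected currents: supply contribution, load sink

# Direct solve, as the "zero effort" Matlab baseline would do:
v_direct = np.linalg.solve(G, i)

# Equivalent solve via a Cholesky factorization G = L L^T:
L = np.linalg.cholesky(G)
y = np.linalg.solve(L, i)      # solve L y = i   (L is lower triangular)
v = np.linalg.solve(L.T, y)    # solve L^T v = y
print(v)                       # node voltages, here approximately [0.9 0.8]
```

Whether to use a direct factorization, multigrid, CG, or something else entirely is up to each team; only the solution file and the runtime are scored.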

Benchmark Format

The input and output formats of each benchmark are the same as those published at http://dropzone.tamu.edu/~pli/PGBench/.
In the output file, the voltage at each node must be printed in %.4e format (four digits after the decimal point), which corresponds to an accuracy of 0.1 mV.
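For instance, in Python's printf-style formatting (the node names below are made up; real names and the exact line layout come from the linked PGBench format):

```python
# Hypothetical node names paired with solved voltages; %.4e keeps four
# digits after the decimal point, i.e. 0.1 mV resolution near 1 V.
voltages = [("n1_123_456", 1.08326), ("n1_789_012", 1.07994)]
lines = ["%s %.4e" % (name, v) for name, v in voltages]
print("\n".join(lines))
```

The same format string works unchanged as a C printf specifier (printf("%.4e", v)).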

Participating Teams

Call for Participation

Registration for the contest is closed.

Webmaster Zhuo Li.
Last updated: Jan 12, 2011