
Bonnie++

Bonnie++ is a widely used benchmark suite for testing hard drives and file systems.

Build

Configure uclinux-dist to build Bonnie++.

Miscellaneous Applications  --->
   --- Benchmarks   
   [*] bonnie++ 
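
Assuming the standard uclinux-dist build flow (the exact targets can vary between releases), the menu above is reached from the top of the uclinux-dist tree on the host and the image is then rebuilt, for example:

make menuconfig    # Miscellaneous Applications ---> Benchmarks ---> [*] bonnie++
make               # rebuild; the bonnie++ binary is then included in the target root file system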

Run Bonnie++

For details of each option, please refer to the bonnie++ manual page.

Test file I/O and file creation:

root:/> bonnie++ -u root -d /mnt

Bypass the write buffer (use fsync() after each write):

root:/> bonnie++ -u root -d /mnt -b

Skip the file creation tests:

root:/> bonnie++ -u root -d /mnt -n 0
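
These options can be combined. A minimal sketch, assuming the /mnt mount point above and purely illustrative sizes (the option letters are those documented in the bonnie++ manual page): -s sets the size in MiB of the file used for the I/O tests, and -r declares the amount of RAM in MiB so Bonnie++ can check that the test file is large enough (at least twice RAM) to defeat caching:

root:/> bonnie++ -u root -d /mnt -s 64 -r 32 -n 0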

Result

For details, please refer to bonnie++/readme.html. Sample output in text format:

Version  1.94       ------Sequential Output------ --Sequential Input- --Random- 
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- 
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP 
blackfin       300M    48  99  6224  75  1782  37   124  99  2874  26 100.4   5 
Latency               348ms     428ms     936ms   84000us     788ms     543ms   
Version  1.94       ------Sequential Create------ --------Random Create-------- 
blackfin            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- 
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP 
                 16   233  99  6543 100  3690  89   235  99  6907 100   825  98 
Latency             79999us    4000us    4000us     148ms    4000us    4000us 
  • The file IO tests are:
    • Sequential Output
      • Per-Character. The file is written using the putc() stdio macro. The loop that does the writing should be small enough to fit into any reasonable I-cache. The CPU overhead here is that required to do the stdio code plus the OS file space allocation.
      • Block. The file is created using write(2). The CPU overhead should be just the OS file space allocation.
      • Rewrite. Each BUFSIZ of the file is read with read(2), dirtied, and rewritten with write(2), requiring an lseek(2). Since no space allocation is done, and the I/O is well-localized, this should test the effectiveness of the filesystem cache and the speed of data transfer.
    • Sequential Input
      • Per-Character. The file is read using the getc() stdio macro. Once again, the inner loop is small. This should exercise only stdio and sequential input.
      • Block. The file is read using read(2). This should be a very pure test of sequential input performance.
    • Random Seeks. This test runs SeekProcCount processes (default 3) in parallel, doing a total of 8000 lseek()s to locations in the file specified by random() on BSD systems or drand48() on SysV systems. In each case, the block is read with read(2). In 10% of cases, it is dirtied and written back with write(2). The idea behind the SeekProcCount processes is to make sure there's always a seek queued up. For any Unix filesystem, the effective number of lseek(2) calls per second declines asymptotically to near 30, once the effect of caching is defeated. The size of the file has a strong nonlinear effect on the results of this test. Many Unix systems that have the memory available will make aggressive efforts to cache the whole thing, and report random I/O rates in the thousands per second, which is ridiculous.
  • The file creation tests use file names with a 7-digit number and a random number (from 0 to 12) of random alphanumeric characters. For the sequential tests the random characters in the file name follow the number. For the random tests the random characters come first. An example of tuning these tests with the -n option is sketched after this list.
    • The sequential tests involve creating the files in numeric order, then stat()ing them in readdir() order (i.e. the order they are stored in the directory, which is very likely the same order in which they were created), and deleting them in the same order.
    • For the random tests we create the files in an order that will appear random to the file system (the last 7 characters are in numeric order on the files). Then we stat() random files (NB this will return very good results on file systems with sorted directories because not every file will be stat()ed and the cache will be more effective). After that we delete all the files in random order. If a maximum size greater than 0 is specified then when each file is created it will have a random amount of data written to it. Then when the file is stat()ed its data will be read.
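
As referenced above, the creation workload can be tuned through the -n option, which per the bonnie++ manual page takes the form num:max-size:min-size:num-directories (num is in multiples of 1024). A hedged sketch with illustrative values, using -s 0 to skip the file I/O tests so that only the creation tests run:

root:/> bonnie++ -u root -d /mnt -s 0 -n 10:65536:0:10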

Format

Bonnie++ provides the tools bon_csv2html and bon_csv2txt to convert CSV (Comma Separated Values) output like the line below to HTML or plain-text format.

1.93c,1.94,ubuntu-pc,1,1230719176,2G,,683,97,23860,4,11986,3,1238,97,29638,3,145.7,2,16,,,,,1975,56,+++++,+++,+++++,+++,3111,87,+++++,+++,8685,71,20720us,438ms,351ms,77591us,22671us,649ms,12952us,84us,200us,12481us,227us,523us
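
A hedged usage sketch, assuming -q (quiet mode) so that only the machine-readable CSV line ends up in the redirected output; the file names are arbitrary:

root:/> bonnie++ -q -u root -d /mnt > result.csv
root:/> bon_csv2txt < result.csv
root:/> bon_csv2html < result.csv > result.html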

HTML format:

Bonnie++ Benchmark results

Version  1.94       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ubuntu-pc        2G   683  97 23860   4 11986   3  1238  97 29638   3 145.7   2
Latency             20720us     438ms     351ms   77591us   22671us     649ms
Version  1.94       ------Sequential Create------ --------Random Create--------
ubuntu-pc           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1975  56 +++++ +++ +++++ +++  3111  87 +++++ +++  8685  71
Latency             12952us      84us     200us   12481us     227us     523us