On 28/06/10 12:09, Wayne Stallwood wrote:
> BTW in case it isn't obvious this totally kills Linux MD array performance as well so it's worth checking on any machine.
Out of curiosity, I just tested one of my boxes with an MD array:
[root@nas ~]# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3584 MB in  2.00 seconds = 1792.19 MB/sec
 Timing buffered disk reads:  226 MB in  3.01 seconds =  75.04 MB/sec
[root@nas ~]# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   3512 MB in  2.00 seconds = 1756.13 MB/sec
 Timing buffered disk reads:  246 MB in  3.00 seconds =  82.00 MB/sec
[root@nas ~]# hdparm -tT /dev/sda &
[root@nas ~]# hdparm -tT /dev/sdb &

/dev/sda:
 Timing cached reads:   3676 MB in  2.00 seconds = 1838.78 MB/sec
 Timing buffered disk reads:  240 MB in  3.02 seconds =  79.55 MB/sec

/dev/sdb:
 Timing cached reads:   3620 MB in  2.00 seconds = 1810.69 MB/sec
 Timing buffered disk reads:  248 MB in  3.01 seconds =  82.30 MB/sec
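Note these are the raw drives; I haven't tested the array device itself yet. Presumably the same test against the md device (substituting the right one for /dev/md0 here) would show whether the MD layer itself is affected:

[root@nas ~]# hdparm -tT /dev/md0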
(Output reformatted for readability.) That's quite a significant speed increase in cached reads when testing both drives together. So I re-ran the tests: the single-drive results stay consistent, but the combined results now vary wildly. I just got cached reads of 1277/1300 MB/sec (buffered: 87/83 MB/sec), then 1023/1124 (82/72), then 1236/1239 (94/91). The drives are identical Samsung models.
Should I read anything into these results?
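If it would help, I could script a handful of repeated runs to average out the run-to-run variation. A rough sketch (untested; the device names are just my two drives):

    #!/bin/sh
    # Run five combined trials; 'wait' holds each trial until both
    # background hdparm runs have finished, so the reads overlap.
    for i in 1 2 3 4 5; do
        hdparm -tT /dev/sda &
        hdparm -tT /dev/sdb &
        wait
    done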