
The SSD Benchmark Thread: Info and Discussion, Tweaking and Benchmarks of SSDs


Ourasi

Recommended Posts

  • 4 weeks later...

 

2x SM961 NVMe 1TB RAID-0  :xmas:
 
 
 
 
 
No, this is NOT a RAM/cache test ;)

 

 

 

I've installed an SM951 NVMe in my rig, but I'm getting mixed results. Did you have to install new drivers?

 

CrystalDiskMark gives me the expected results, but both Anvil and AS SSD report completely wrong write results: sequential write speeds below 1 MB/s, so I simply have to abort before the tests finish.

 

1x SM951 NVMe gives me the following in CrystalDiskMark:

  • Seq Q32T1 - 2261/1592 (R/W)
  • 4K Q32T1 - 705/421
  • Seq - 1744/1569
  • 4K - 48/196 
Edited by j0achim

 

 


Read the first and second posts here:

 

https://www.diskusjon.no/index.php?showtopic=1686224

  • 1 year later...

I got to test an Intel DC P4800X today. The tests were run on FreeBSD 11.1-RELEASE.

 

pg_test_fsync

5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                                   n/a
        fdatasync                         24074.895 ops/sec      42 usecs/op
        fsync                             13343.773 ops/sec      75 usecs/op
        fsync_writethrough                              n/a
        open_sync                          9210.815 ops/sec     109 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                                   n/a
        fdatasync                         21576.463 ops/sec      46 usecs/op
        fsync                             12289.219 ops/sec      81 usecs/op
        fsync_writethrough                              n/a
        open_sync                          4383.573 ops/sec     228 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          8741.000 ops/sec     114 usecs/op
         2 *  8kB open_sync writes         4413.492 ops/sec     227 usecs/op
         4 *  4kB open_sync writes         2213.384 ops/sec     452 usecs/op
         8 *  2kB open_sync writes         1106.070 ops/sec     904 usecs/op
        16 *  1kB open_sync writes          553.425 ops/sec    1807 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close               11355.020 ops/sec      88 usecs/op
        write, close, fsync               10877.350 ops/sec      92 usecs/op

Non-sync'ed 8kB writes:
        write                            210390.001 ops/sec       5 usecs/op
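What pg_test_fsync measures above can be approximated with a small standalone loop. This is a minimal sketch, not PostgreSQL's actual implementation: it times repeated 8 kB write-plus-fsync cycles against a scratch file, which corresponds to the "one 8kB write" fsync case in the output.

```python
import os
import tempfile
import time

def fsync_ops_per_sec(block_size=8192, duration=1.0):
    """Repeatedly overwrite one block and fsync it, returning achieved ops/sec.

    Rough analogue of pg_test_fsync's 'one 8kB write' fsync test."""
    buf = b"\0" * block_size
    fd, path = tempfile.mkstemp()
    try:
        ops = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, buf)
            os.fsync(fd)  # force the block through to stable storage
            ops += 1
        return ops / duration
    finally:
        os.close(fd)
        os.unlink(path)

print(round(fsync_ops_per_sec(duration=0.5)))
```

The absolute number depends entirely on the device and filesystem under the temp directory, so it is only useful for comparing drives on the same machine.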
diskinfo(ZFS slog)

Synchronous random writes:
0.5 kbytes: 85.1 usec/IO = 5.7 Mbytes/s
1 kbytes: 92.4 usec/IO = 10.6 Mbytes/s
2 kbytes: 98.5 usec/IO = 19.8 Mbytes/s
4 kbytes: 101.5 usec/IO = 38.5 Mbytes/s
8 kbytes: 104.5 usec/IO = 74.7 Mbytes/s
16 kbytes: 106.7 usec/IO = 146.4 Mbytes/s
32 kbytes: 89.5 usec/IO = 349.2 Mbytes/s
64 kbytes: 143.9 usec/IO = 434.3 Mbytes/s
128 kbytes: 234.1 usec/IO = 534.0 Mbytes/s
256 kbytes: 378.6 usec/IO = 660.3 Mbytes/s
512 kbytes: 530.1 usec/IO = 943.3 Mbytes/s
1024 kbytes: 916.3 usec/IO = 1091.4 Mbytes/s
2048 kbytes: 1820.9 usec/IO = 1098.3 Mbytes/s
4096 kbytes: 3570.3 usec/IO = 1120.3 Mbytes/s
8192 kbytes: 7085.1 usec/IO = 1129.1 Mbytes/s
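The throughput column in the diskinfo output follows directly from the latency column: bytes moved per I/O divided by time per I/O, with "Mbytes" meaning MiB. A quick sanity check (the helper name is ours, not part of diskinfo):

```python
def mbytes_per_sec(kbytes_per_io, usec_per_io):
    # bytes per I/O divided by seconds per I/O, reported in MiB/s
    bytes_per_sec = (kbytes_per_io * 1024) / (usec_per_io / 1e6)
    return bytes_per_sec / (1024 * 1024)

print(round(mbytes_per_sec(4, 101.5), 1))   # -> 38.5, matching the 4 kbytes row
print(round(mbytes_per_sec(0.5, 85.1), 1))  # -> 5.7, matching the 0.5 kbytes row
```

Note how latency stays nearly flat up to 32 kbytes, so throughput grows almost linearly with block size until the transfer itself starts to dominate.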
fio sync write 4kb 1 job

fio --filename=/dev/nvd1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=226MiB/s][r=0,w=57.9k IOPS][eta 00m:00s]
journal-test: (groupid=0, jobs=1): err= 0: pid=53375: Wed Oct 18 16:35:37 2017
write: IOPS=57.7k, BW=225MiB/s (236MB/s)(13.2GiB/60000msec)
clat (usec): min=13, max=560, avg=15.99, stdev= 6.16
lat (usec): min=13, max=560, avg=16.07, stdev= 6.16
clat percentiles (nsec):
| 1.00th=[13888], 5.00th=[14016], 10.00th=[14144], 20.00th=[14144],
| 30.00th=[14144], 40.00th=[14272], 50.00th=[14272], 60.00th=[14400],
| 70.00th=[14400], 80.00th=[14528], 90.00th=[18560], 95.00th=[30080],
| 99.00th=[47360], 99.50th=[49920], 99.90th=[63744], 99.95th=[70144],
| 99.99th=[85504]
bw ( KiB/s): min=225204, max=230642, per=98.63%, avg=227649.37, stdev=662.77, samples=119
iops : min=56301, max=57660, avg=56911.97, stdev=165.75, samples=119
lat (usec) : 20=92.84%, 50=6.67%, 100=0.49%, 250=0.01%, 750=0.01%
cpu : usr=10.22%, sys=26.17%, ctx=3462223, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=0,3462157,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=13.2GiB (14.2GB), run=60000-60000msec
fio sync write 4kb 16 jobs

fio --filename=/dev/nvd1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=16 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.1
Starting 16 processes
Jobs: 16 (f=16): [W(16)][100.0%][r=0KiB/s,w=1919MiB/s][r=0,w=491k IOPS][eta 00m:00s]
journal-test: (groupid=0, jobs=16): err= 0: pid=53380: Wed Oct 18 16:37:49 2017
write: IOPS=469k, BW=1833MiB/s (1922MB/s)(107GiB/60001msec)
clat (usec): min=14, max=2656, avg=32.42, stdev=16.35
lat (usec): min=14, max=2656, avg=32.57, stdev=16.35
clat percentiles (usec):
| 1.00th=[ 19], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22],
| 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 28],
| 70.00th=[ 32], 80.00th=[ 44], 90.00th=[ 58], 95.00th=[ 66],
| 99.00th=[ 90], 99.50th=[ 99], 99.90th=[ 121], 99.95th=[ 130],
| 99.99th=[ 184]
bw ( KiB/s): min=67121, max=125237, per=6.24%, avg=117076.99, stdev=9138.57, samples=1904
iops : min=16780, max=31309, avg=29268.98, stdev=2284.61, samples=1904
lat (usec) : 20=6.55%, 50=79.29%, 100=13.70%, 250=0.45%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%
cpu : usr=6.85%, sys=20.10%, ctx=31796952, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=0,28155381,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=1833MiB/s (1922MB/s), 1833MiB/s-1833MiB/s (1922MB/s-1922MB/s), io=107GiB (115GB), run=60001-60001msec
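As a sanity check on the two fio runs above: bandwidth and IOPS are consistent for 4 KiB blocks, and going from 1 to 16 jobs scales throughput about 8x rather than 16x, since average completion latency roughly doubles (16 µs to 32 µs):

```python
# figures taken from the two fio runs above
bw_1job, bw_16jobs = 225.0, 1833.0    # MiB/s

iops_1job = bw_1job * 1024 / 4        # 4 KiB per I/O
print(round(iops_1job))               # -> 57600, matching fio's ~57.7k IOPS
print(round(bw_16jobs / bw_1job, 1))  # -> 8.1x scaling from 1 to 16 jobs
```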
So, in conclusion: roughly 1000 times faster than a Samsung 960 Pro on these synchronous write workloads.
Edited by siDDis
  • Uderzo unpinned this topic
