Late last year I switched from a 1.5TB Optane 905P to a 4TB WD Blue SN5000 NVMe drive in a gaming machine and saw improved load times, which makes sense given the read and write speeds are ~double. No observable difference otherwise.
I'm sure that's not the use case you were looking for. I could probably tease out the difference in latency with benchmarks but that's not how I use the computer.
The 905P is now in service as an SSD cache for a large media server, which came with a big performance boost, but the baseline I'm comparing against is just spinning drives.
https://pcpartpicker.com/forums/topic/425127-benchmarking-op...
You can compare their benchmarks with those of the nearly 400 other SSDs we've benchmarked. Most impressive is that three years later they are still the top random read QD1 performers, with no traditional flash SSD coming anywhere close:
https://pcpartpicker.com/products/internal-hard-drive/benchm...
They are amazing for how consistent and boring their performance is. Byte-level access means no need for TRIM or garbage collection, performance doesn't degrade over time, latency is great, and random IO is not problematic.
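The QD1 advantage is mostly a latency story: at queue depth 1 the host issues one request at a time, so sustained throughput is just block size divided by round-trip latency. A minimal sketch of that arithmetic (the latency figures below are illustrative assumptions, not measurements):

```python
BLOCK = 4096  # bytes; a typical random-read block size

def qd1_throughput_mib_s(latency_us: float) -> float:
    """At queue depth 1 each request completes before the next is issued,
    so sustained throughput is simply block_size / latency."""
    return BLOCK / (latency_us * 1e-6) / (1024 ** 2)

# Illustrative round-trip latencies: ~10 us for an Optane-class drive,
# ~60 us for a fast NAND SSD at QD1.
print(qd1_throughput_mib_s(10))  # ~390 MiB/s
print(qd1_throughput_mib_s(60))  # ~65 MiB/s
```

Which is why a drive with unremarkable sequential numbers can still dominate QD1 random reads: only lower media latency moves that figure, and NAND can't match 3D XPoint there.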
It's so incredibly fast and responsive that the LuCI interface completely loads the moment I hit enter on the login form.
It's hard to say, because it's subjective and I don't swap back and forth between an SSD and the Optane drives. My old system has a 2TB Samsung 980 Pro NVMe drive (PCIe 4.0 x4, ~8GB/s max) as root and a 4TB Sabrent Rocket 4 Plus (also PCIe 4.0) as secondary, so I ran sysbench on both systems so I could share the differences. (Old system: 5950X, new system: 9950X3D.)
It feels snappier, especially when doing compilations...
Sequential reads: I started with a 150GB fileset, but it was being served from the kernel page cache on my newer system (256GB RAM vs 128GB on the old), so I switched to 300GB of data. The Optanes gave me 5000 MiB/s for sequential reads, as opposed to 2800 MiB/s for the 980 Pro and 4340 MiB/s for the Rocket 4 Plus.
Random writes alone (no read workload): the Optane system gets 2184 MiB/s, the 980 Pro gets 32 MiB/s, and the Rocket 4 Plus gets 53 MiB/s.
Mixed workload (random read/write): the Optanes get 725/483 MiB/s, as opposed to 9/6 for the 980 Pro and 42/28 for the Rocket 4 Plus.
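Since sysbench reports both requests/s and MiB/s, the two can be cross-checked: at the test's 16 KiB block size, throughput divided by block size should reproduce the reported request rate. A quick sanity check against the numbers from these runs:

```python
BLOCK_KIB = 16  # sysbench ran with 16 KiB blocks

def iops(mib_per_s: float) -> float:
    """Convert sysbench MiB/s throughput back to requests/s
    at the test's block size."""
    return mib_per_s * 1024 / BLOCK_KIB

# Optane mixed-workload read throughput:
print(round(iops(725.34)))  # 46422, matching the reported 46421.95 reads/s
# 980 Pro, for comparison:
print(round(iops(9.29)))    # ~595
```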
2x 1.5TB Optane RAID 0:

Prep: `sysbench fileio --file-total-size=150G prepare`: 161061273600 bytes written in 50.41 seconds (3047.27 MiB/sec).

Benchmark: `sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`

    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 1.1719GiB each
    150GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      46421.95
        writes/s:                     30947.96
        fsyncs/s:                     99034.84

    Throughput:
        read, MiB/s:                  725.34
        written, MiB/s:               483.56

    General statistics:
        total time:                   60.0005s
        total number of events:       10584397

    Latency (ms):
        min:                          0.00
        avg:                          0.01
        max:                          1.32
        95th percentile:              0.03
        sum:                          58687.09

    Threads fairness:
        events (avg/stddev):          10584397.0000/0.00
        execution time (avg/stddev):  58.6871/0.00
2TB NAND Samsung 980 Pro:

Prep: `sysbench fileio --file-total-size=150G prepare`: 161061273600 bytes written in 87.15 seconds (1762.53 MiB/sec).

Benchmark: `sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`

    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 1.1719GiB each
    150GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      594.34
        writes/s:                     396.23
        fsyncs/s:                     1268.87

    Throughput:
        read, MiB/s:                  9.29
        written, MiB/s:               6.19

    General statistics:
        total time:                   60.0662s
        total number of events:       135589

    Latency (ms):
        min:                          0.00
        avg:                          0.44
        max:                          15.35
        95th percentile:              1.73
        sum:                          59972.76

    Threads fairness:
        events (avg/stddev):          135589.0000/0.00
        execution time (avg/stddev):  59.9728/0.00
4TB Sabrent Rocket 4 Plus:

Prep: `sysbench fileio --file-total-size=300G prepare`: 322122547200 bytes written in 152.39 seconds (2015.92 MiB/sec).

Benchmark: `sysbench fileio --file-total-size=300G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`

    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 2.3438GiB each
    300GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      2690.28
        writes/s:                     1793.52
        fsyncs/s:                     5740.92

    Throughput:
        read, MiB/s:                  42.04
        written, MiB/s:               28.02

    General statistics:
        total time:                   60.0155s
        total number of events:       613520

    Latency (ms):
        min:                          0.00
        avg:                          0.10
        max:                          8.22
        95th percentile:              0.32
        sum:                          59887.69

    Threads fairness:
        events (avg/stddev):          613520.0000/0.00
        execution time (avg/stddev):  59.8877/0.00