
I think on recent kernels, using the hipri option doesn't get you interrupt-free polled IO unless you've configured the nvme driver to allocate some queues specifically for polled IO. Since these Samsung drives support 128 queues and you're only using a 16C/32T processor, you have more than enough for each drive to have one poll queue and one regular IO queue allocated to each (virtual) CPU core.
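
For what it's worth, a quick way to check whether any poll queues are currently allocated (device names below are just examples, adjust for your drives):

    # 0 here means the driver has no dedicated poll queues
    cat /sys/module/nvme/parameters/poll_queues
    # 1 here means the block layer is allowed to poll this device's queue
    cat /sys/block/nvme0n1/queue/io_poll
    # shows how the driver split the queues at probe time
    dmesg | grep 'default/read/poll queues'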



That would explain it. Can you recommend any docs/links to read about allocating queues for polled IO?


It's terribly documented :(. You need to set nvme.poll_queues to the number of poll queues you want before the disks are attached, i.e. either at boot, or by setting the parameter and then causing the NVMe device to be rescanned (you can do that in sysfs, but I can't immediately recall the exact steps with high confidence).
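
For the boot-time route, a sketch of the usual options (the file name and queue count are just examples, and the modprobe.d route only applies if nvme is built as a module):

    # option 1: kernel command line, works whether nvme is built-in or a module
    #   ... nvme.poll_queues=8 ...
    # option 2: module option, picked up when the nvme module loads
    echo 'options nvme poll_queues=8' > /etc/modprobe.d/nvme-poll.conf
    update-initramfs -u    # or dracut -f, depending on the distro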


Ah, yes, shell history ftw. Of course you should make sure nothing is using the device (no mounted filesystem etc.) before doing this:

    root@awork3:~# echo 4 > /sys/module/nvme/parameters/poll_queues
    root@awork3:~# echo 1 > /sys/block/nvme1n1/device/reset_controller
    root@awork3:~# dmesg -c
    [749717.253101] nvme nvme1: 12/0/4 default/read/poll queues
    root@awork3:~# echo 8 > /sys/module/nvme/parameters/poll_queues
    root@awork3:~# dmesg -c
    root@awork3:~# echo 1 > /sys/block/nvme1n1/device/reset_controller
    root@awork3:~# dmesg -c
    [749736.513102] nvme nvme1: 8/0/8 default/read/poll queues
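
And to sanity-check that polling actually happens, something like the fio run below, watching /proc/interrupts before and after, should do (just a sketch, adjust the filename/engine to your setup; with working polling the nvme interrupt counters should barely move during the run):

    grep nvme /proc/interrupts
    fio --name=polltest --filename=/dev/nvme1n1 --direct=1 --rw=randread \
        --bs=4k --iodepth=1 --ioengine=pvsync2 --hipri --runtime=10 --time_based
    grep nvme /proc/interrupts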


Thanks for the pointers, I'll bookmark this and try it out someday.



