Posts posted by DelSol
-
Hi there,
so in the Disk Settings, the
Default warning disk utilization threshold (%):
and
Default critical disk utilization threshold (%):
fields only accept integer values. But on my 10TB drive, 99% still means 100GB left, and floating point values are not allowed.
Is there a workaround, or can this be added as a feature request for a future release?
Also, a GB value could sometimes be more useful than a percentage when you have mixed drive sizes like 1TB and 10TB.
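As a quick back-of-the-envelope check of why integer percentages are too coarse here (treating 10TB as a nominal 10,000GB, which is an assumption about how the size is counted):

```shell
# Free space remaining when a drive of a given size hits a utilization
# threshold; at the 99% integer maximum, a ~10,000 GB drive keeps 100 GB free
awk 'BEGIN {
  size_gb = 10000
  threshold_pct = 99
  printf "%.0f GB free at %d%%\n", size_gb * (100 - threshold_pct) / 100, threshold_pct
}'
# prints: 100 GB free at 99%
```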
-
I'm on the latest stable, 6.7.0. Is this something new in the beta?
If not, then it's probably caused by my old PCIe controller...
-
So for everyone, this is (probably) the solution to my issues:
First of all, my SSD is encrypted via LUKS; I did not know that this makes any difference. LUKS itself disables all TRIM commands by default for security reasons.
I don't know why I'm getting hundreds of error messages, one for every block TRIM tries to modify, while others just get a single message like "trim not supported" or "device not accessible".
Also, I never had any issues running TRIM on my encrypted SATA SSD...?
Anyway, I'll stick with my Corsair SSD for now; I disabled the TRIM cron and will watch its performance over the next weeks.
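For context: on a stock Linux system, TRIM can be passed through a LUKS mapping by opening it with discards allowed. I can't say whether or where Unraid exposes this, so treat the following /etc/crypttab line as a purely illustrative sketch; the mapping name cache and the device path are assumptions, not my actual config:

```
# /etc/crypttab (illustrative only): the "discard" option lets TRIM pass
# through the LUKS layer; the trade-off is that free-space patterns become
# visible on the raw device, which is why LUKS blocks TRIM by default
cache  /dev/nvme0n1p1  none  luks,discard
```

The same effect can be had when opening the container manually via cryptsetup's --allow-discards option.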
-
It seems like in most other cases TRIM does not work at all; for me, it's only some sectors failing per run...
So should I switch to another SSD?
Samsung 1TB drives are pretty expensive; the Intel 660p is about the same price as the Corsair, but slower in I/O and other benchmarks.
Is it really worth switching?
-
So I bought a second SSD, same brand, same model, same size, and the "Error information log entries" were gone for days.
Then my SSD TRIM cron kicked in and boom, same "issue": I'm getting the same I/O errors, and my "Error information log entries" count started to rise again.
So I am still not sure if this is ignorable. My system runs fine, though.
Maybe worth mentioning: my server does not have a dedicated M.2 slot, so I used a PCIe adapter. I tried two models, no difference.
-
My disks seem as fast (or slow) as they used to be. If the test is new, maybe my drive cache was already disabled before?!
-
Same here: after a reboot, all 4 of my disks have their caches disabled.
Any information on this?
-
Had this error again last night. My SSD TRIM cron runs at 2 AM, so it was probably caused by that.
So should I replace my device, or is this still okay? Can anyone confirm?
May 28 21:50:12 unRAID emhttpd: shcmd (471): mount -t xfs -o noatime,nodiratime /dev/mapper/nvme0n1p1 /mnt/cache
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 266962616
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 275351223
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 283739830
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 292128437
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 300517044
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 308905651
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 317294258
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 325682865
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 334071472
May 29 02:00:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 342460079
-
So now I had this in my drive log:
May 28 11:33:25 unRAID emhttpd: shcmd (471): mkfs.xfs -m crc=1,finobt=1 -f /dev/mapper/nvme0n1p1
May 28 11:33:25 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 4160
May 28 11:33:25 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 8392767
May 28 11:33:25 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 16781374
May 28 11:33:25 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 25169981
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 33558588
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 41947195
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 50335802
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 58724409
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 67113016
May 28 11:33:26 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 75501623
May 28 11:33:26 unRAID root: meta-data=/dev/mapper/nvme0n1p1 isize=512 agcount=4, agsize=58605652 blks
May 28 11:33:26 unRAID emhttpd: shcmd (473): mount -t xfs -o noatime,nodiratime /dev/mapper/nvme0n1p1 /mnt/cache
and I will also add my full SMART report here.
The device is a Corsair Force MP510 960GB
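Incidentally, the failing sectors in the log above are evenly spaced, which to me suggests each error corresponds to one maximum-size discard request rather than to scattered bad blocks. This is my reading, not a confirmed diagnosis; a quick check using the first three sector numbers from the log (512-byte sectors):

```shell
# Spacing between consecutive failing sectors from the mkfs log above
# (4160, 8392767, 16781374); the gap works out to ~4 GiB per request
printf '%s\n' 4160 8392767 16781374 | awk '
  NR > 1 { printf "gap: %d sectors (%.1f GiB)\n", $1 - prev, ($1 - prev) * 512 / 1024^3 }
  { prev = $1 }'
# both lines print: gap: 8388607 sectors (4.0 GiB)
```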
-
Hello guys,
so I replaced my old SATA SSD cache drive with a shiny new NVMe M.2 SSD.
Unraid's drive log tells me this:
-Power on hours 41 (1d, 17h)
-Unsafe shutdowns 64
-Media and data integrity errors 0
-Error information log entries 2,016
Should I be worried about the error information log entries? The count is still rising. I searched the internet for this message, and some say it's totally normal and ignorable.
Thanks for your help
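(The parenthetical after "Power on hours" is just the raw hour count split into days and hours; as a quick sanity check on the 41 hours above:)

```shell
# 41 power-on hours from the SMART output above, split into days and hours
echo 41 | awk '{ printf "%dd, %dh\n", int($1 / 24), $1 % 24 }'
# prints: 1d, 17h
```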
-
Thank you jonathan,
so I will not lose my data/file structure/shares or anything when I delete my array config?
What about file permissions on the new drive, is there anything I have to note here?
-
Hello,
I don't use parity because I run backups every night to my other server, and I like the speed advantage of having no parity.
I have 3x5TB and 1x3TB drives, and I would like to replace the 3TB with a 5TB.
How can I do this?
I'm using LUKS encryption. Can I add the new drive as a 5th drive, encrypt it, copy everything over, and then replace the 3TB with it in the array setup?
Posted in "Default critical disk utilization threshold" in General Support:
Thanks for your advice, but I really want to squeeze the last bits out of my hard drives; no parity needed. But I still want to get a warning at about 20GB of free space left or so. Is there no way to get this?