limetech Posted July 11, 2020
1 hour ago, dansonamission said: Is there a way to format the cache drive(s) in the current release via command line with this 1MiB boundary?
This is fixed in the next beta... just hang on a bit.
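In the meantime, here is a rough sketch of the alignment arithmetic for anyone wanting to check an existing drive. The device path in the comment is an assumption; substitute your own device and partition names:

```shell
# A 1MiB boundary is 2048 sectors of 512 bytes. This helper only does the
# arithmetic; on a live system the partition's start sector can be read
# with e.g.:  cat /sys/block/nvme0n1/nvme0n1p1/start
is_1mib_aligned() {
  local start_sector=$1
  if [ $(( start_sector % 2048 )) -eq 0 ]; then
    echo "aligned"
  else
    echo "unaligned"
  fi
}

is_1mib_aligned 2048   # new 1MiB-aligned layout -> aligned
is_1mib_aligned 64     # old layout starting at sector 64 -> unaligned
```

A start sector of 2048 indicates the new layout; 64 indicates the old one.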
dansonamission Posted July 12, 2020
10 hours ago, limetech said: This is fixed in the next beta... just hang on a bit.
Hanging on... How long before it becomes an RC? I'm not running betas, and there are only 5 days left of the trial on this system. If I can format via the command line then great, as I will know straight away whether it has worked.
Allram Posted July 13, 2020 (edited)
Running the latest beta, 6.9.0-beta25, I formatted my SSDs to the new partition alignment. Massive boost in speed on my Samsung 860 Evo and Qvo drives! And they are not locking up when I do a lot of transfers, as they did earlier.
Edited July 13, 2020 by Allram
mdfverona Posted August 2, 2020
I think I was seeing this same thing on a smaller Adata SSD, the SU635 / 240GB. A MakeMKV docker pinned to 1 core, writing just 30MB/s to the cache SSD, would push all cores/threads into high iowait after 15 minutes or so. I found this thread and swapped the cache to XFS, and the iowait impact is much reduced now. Reads from the SSD to the array were 200-300MB/s with no problem (and no iowait at all) when I was moving stuff off the cache so I could reformat. Looking forward to the update, but if you don't need a cache pool, just sticking with XFS isn't a bad solution; it seems to work well this way.
mbc0 Posted August 27, 2020
Hi, I have 2 NVMe drives, both capable of 3000MB/s write speed:
Samsung 970 EVO Plus 1TB
Sabrent Rocket NVMe PCIe 1TB
I could write to both of those individually at 1000MB/s over my fibre connection from my desktop. Together as a BTRFS cache pool I am only getting 230MB/s. I have installed the latest beta, removed and reinstalled both NVMe drives as a cache pool, and reformatted to the MBR: 1MiB-aligned layout. I have attached my diagnostics in case anyone can help, please?
unraid1-diagnostics-20200827-0900.zip
mbc0 Posted August 29, 2020
Can anyone advise whether I can safely change to an XFS pool, please? BTRFS is too slow.
JorgeB Posted August 29, 2020
2 minutes ago, mbc0 said: Can anyone advise if I can safely change to an xfs pool
There are no multi-device XFS pools. You can have single-device XFS "pools"; multi-device pools are only possible with btrfs, or with ZFS via the ZFS plugin, but those won't be available in the GUI.
mbc0 Posted August 29, 2020
29 minutes ago, johnnie.black said: There are no multi-device XFS pools...
OK, thanks for that. So it is looking like BTRFS doesn't work for me on the latest beta; is that likely due to the Samsung drive? I can purchase another Sabrent Rocket.
JorgeB Posted August 29, 2020
19 minutes ago, mbc0 said: is that likely to be due to the Samsung Drive?
Difficult to say if it will help; some users have bad performance with pools, others not. Samsung should work fine with the new alignment, though.
mbc0 Posted August 29, 2020
30 minutes ago, johnnie.black said: Difficult to say if it will help...
OK, I can only presume it is because of the mix of Sabrent and Samsung. If the below looks OK to you, I will purchase another Sabrent to see if it makes a difference.
mbc0 Posted August 29, 2020
5 hours ago, BRiT said: Did you newly create and format the drives in 6.9 beta or were they carry overs from earlier?
On 8/27/2020 at 9:01 AM, mbc0 said: I have installed the latest beta, removed and reinstalled both nvme drives as a cache pool and reformatted to the MBR: 1MiB-aligned
mbc0 Posted August 30, 2020
@johnnie.black I have removed the Samsung from the cache pool, leaving a sole Sabrent Rocket, but I am still only getting 230MB/s. Could this be due to the partition alignment? I can get 1000MB/s from this drive as an unassigned device. Or is it because Unraid still thinks it is a pool rather than a single drive? Does this prove that the Sabrent Rocket is the problem here?
JorgeB Posted August 31, 2020
19 hours ago, mbc0 said: I am still only getting 230MB/s. Could this be due to the partition alignment? I can get 1000MB/s from this drive as an unassigned device.
No, this just means the pool itself wasn't the reason for the performance issues.
Allram Posted August 31, 2020
19 hours ago, mbc0 said: I have removed the Samsung from the cache pool, leaving a sole Sabrent Rocket, but I am still only getting 230MB/s...
Since you are running a single disk now, what speeds do you get if you format the drive to XFS?
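Whatever the filesystem, it helps to measure write speed locally so the network and client software are out of the picture. A minimal sketch with dd, assuming you point the target path at your cache mount (e.g. /mnt/cache on Unraid). Note that conv=fdatasync flushes to disk before dd reports, so RAM caching doesn't inflate the number, and zeros are highly compressible, so the result is only a rough ceiling:

```shell
# Write 256 MiB and flush before dd prints its throughput figure.
# /tmp/ddtest is a placeholder path: change it to a file on the drive
# you actually want to test, e.g. /mnt/cache/ddtest.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm /tmp/ddtest
```

Comparing this figure before and after the XFS reformat would show whether the filesystem is the bottleneck.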
mbc0 Posted August 31, 2020
3 hours ago, Allram said: Since you are running a single disk now, what speeds do you get if you format the drive to XFS?
OK, I can't believe this, but my setting up a cache pool coincided with an update from Avast that dropped my speeds to 230MB/s! I tried a transfer from VM to VM (unraid1 to unraid2) and speeds were spot on, so I looked through all my Windows programs and stopped them one by one; as soon as I stopped Avast, speeds were back to 1000MB/s. So sorry to have wasted your time and anyone else's, but I hope this helps someone else at some point!
guitarlp Posted February 2, 2023
This is an old thread (apologies), but this still doesn't appear to be fixed in the latest `6.11.5`? When I switch from btrfs raid 1 to non-raid XFS, can I do that encrypted, or does it have to be an unencrypted XFS disk? I read something in this thread about encryption possibly contributing to this issue.
trurl Posted February 2, 2023
12 minutes ago, guitarlp said: switch from btrfs raid 1 to XFS non raid, can I do that encrypted
You have to reformat, so the new format can be encrypted or not.
guitarlp Posted February 2, 2023
9 minutes ago, trurl said: You have to reformat, so the new format can be encrypted or not.
Thank you for the response. I understand that part, though. I default to encrypting everything, but I'm wondering if an encrypted XFS drive is going to fix this, or if I ultimately need an unencrypted XFS disk. Normally encryption doesn't impact perceived performance... but normally raid1 doesn't cause huge iowaits with freezing docker containers :).
guitarlp Posted February 2, 2023
On 1/15/2020 at 10:58 PM, robobub said: I had similar symptoms, using an older Samsung 830 SSD as a single btrfs LUKS-encrypted cache. When copying a very large file, iowait would hit the 80s and at some point the system became unresponsive, with write speeds around 80MB/s. However, moving to LUKS-encrypted XFS did not help things at all. In my case, it had to do with LUKS encryption: moving to a non-encrypted cache, either btrfs or XFS, iowait would be much lower and write speeds around 200MB/s. And this is on an i7-3770, which has AES acceleration, with barely any CPU utilization. One guess is that the 830 controller doesn't handle incompressible data as well, but looking at reviews, that's where it shined compared to SandForce controllers.
Some searching led me to this post: setting the IO scheduler to none for my cache drive helped a bit, but lowering nr_requests with any IO scheduler helped more, at least in my case. This post for example.
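For reference, the tweaks robobub describes are plain sysfs writes. A hedged sketch, assuming the cache device is sdb (substitute your own device; these need root, are runtime-only, and reset on reboot, so they would typically be reapplied from the go file or a user script):

```shell
# Select the "none" I/O scheduler for the cache device.
echo none > /sys/block/sdb/queue/scheduler

# Lower the request queue depth; 8 is just an example value to experiment
# with, not a recommendation from this thread.
echo 8 > /sys/block/sdb/queue/nr_requests

# Verify: the active scheduler is shown in square brackets.
cat /sys/block/sdb/queue/scheduler
```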
JorgeB Posted February 3, 2023
9 hours ago, guitarlp said: I default to encrypt everything, but I'm wondering if doing an encrypted XFS drive is going to fix this, or if I need to ultimately do an unencrypted XFS disk.
Try it for yourself; there's no clear reason why this issue happens to some users, so there's also no clear solution. Another thing that might be worth trying is this: https://forums.unraid.net/topic/91883-tracking-down-iowait-cause/?do=findComment&comment=1221804
DanielPT Posted November 8, 2023
Hi all, I have 2 x Samsung 870 SSDs in my cache pool. How can I find out if I have this problem?
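One rough way to check is to watch iowait while copying a large file to the pool. This sketch reads the cumulative iowait counter from /proc/stat (Linux only); sampling it before and after a transfer and comparing the deltas shows how much of the interval was spent waiting on I/O:

```shell
# The first line of /proc/stat looks like:
#   cpu  user nice system idle iowait irq softirq ...
# (values are clock ticks accumulated since boot)
read -r _cpu user nice system idle iowait _rest < /proc/stat
total=$(( user + nice + system + idle + iowait ))
echo "cumulative iowait: $iowait of $total ticks"
```

If the iowait delta dominates the interval during writes (and throughput collapses, as earlier posts describe), you are likely seeing the same symptom.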