dnoyeb Posted March 12, 2018
I've been able to reproduce, on multiple machines with multiple Samsung SSDs, that the starting block of 64 causes issues with the 840 and 850 (and possibly 860) Samsung SSDs. They have a non-standard NAND erase block size of 1536k instead of 1024k, which causes poor performance and uneven write speeds to the drives. Please allow us to set the starting block for the BTRFS pool drives when building the pool. Thanks!
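For context on the alignment arithmetic (the 1536k/1024k erase-block sizes are from the report above; the sector math itself is standard, assuming 512-byte logical sectors): a partition starting at sector 64 sits only 32 KiB into the disk, which isn't a multiple of even a standard 1024 KiB erase block, while the common 2048-sector (1 MiB) start is. A quick sketch to check this:

```shell
# Check whether a partition start sector is aligned to a given erase-block
# size. Assumes 512-byte logical sectors (typical for these SATA SSDs).
check_align() {
    start_sector=$1
    erase_kib=$2
    offset=$(( start_sector * 512 ))
    if [ $(( offset % (erase_kib * 1024) )) -eq 0 ]; then
        echo "start sector $start_sector: aligned to ${erase_kib}K erase blocks"
    else
        echo "start sector $start_sector: NOT aligned to ${erase_kib}K erase blocks"
    fi
}

check_align 64 1024     # 32 KiB offset  -> not aligned
check_align 2048 1024   # 1 MiB offset   -> aligned
check_align 2048 1536   # 1 MiB is not an exact multiple of 1536K either
```

Note that for a 1536K erase block even the usual 1 MiB start isn't an exact multiple; the nearest start sector that is would be a multiple of 3072 (3072 × 512 B = 1536 KiB), which may be why a larger starting block helps rather than fully fixes things.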
thomast_88 Posted March 12, 2018
Any way to set this manually? I too suffer from the poor-performance problems with BTRFS on Samsung 850 SSDs.
-Daedalus Posted March 15, 2018
Has this something to do with the system-responsiveness issue during large writes to the cache, I wonder?
methanoid Posted March 19, 2018
Would be nice if @limetech chipped in here. I have a 480GB Toshiba and was going to add a 500GB Samsung to make a cache pool, but I'm a bit worried to do so now.
limetech Posted March 20, 2018
On 3/19/2018 at 7:47 AM, methanoid said: "Would be nice if @limetech chipped in here. I have a 480GB Toshiba and was going to add a 500GB Samsung to make a cache pool but a bit worried to do so now."
Interesting, yeah we'll look into this.
dnoyeb Posted March 21, 2018 (Author)
@limetech just for reference: I was able to reproduce the same issue with multiple Samsung SSDs, and there is a thread where others hit the same issue back in 2017 when using Samsungs in their cache pools. Since removing my Samsung from the pool and swapping to XFS, my machine has been working like a boss with no issues whatsoever. Details showing the Samsung write issues are in that other thread: if you start reading at this post and read down a few more posts, you'll see where I ran tests and have graphs etc. showing the write-performance differences on the Samsung device when going from a starting block of 64 up to 2048.
limetech Posted March 21, 2018
3 minutes ago, dnoyeb said: "Since removing my samsung from the pool and swapping to XFS; my machine is working like a boss with no issues whatsoever."
Thanks for the info. Haven't read through all that yet, but would you speculate:
a) Samsung with btrfs cache disk => issue exists
b) Samsung with xfs cache disk => issue does not exist
dnoyeb Posted March 21, 2018 (Author)
I completely removed the Samsung and went with a SanDisk in the end, since I saw issues writing to the Samsung via Unassigned Devices with the wrong starting block. I will try to pull the SanDisk and swap in the Samsung this weekend as a test. One other item of note: I have 256GB of RAM; some speculated that was introducing another factor into things.
Warrentheo Posted March 23, 2018 (edited)
I have 2x 960 EVOs in RAID-0 cache... Just checked my starting blocks and they were both at zero... Possibly because I followed these steps before formatting them: https://wiki.archlinux.org/index.php/Solid_State_Drive
Quote: "Trim an entire device — If you want to trim your entire SSD at once, e.g. for a new install, or you want to sell your SSD, you can use the blkdiscard command, which will instantly discard all blocks on a device. Warning: all data on the device will be lost!
# blkdiscard /dev/sdX"
Which I did with these commands:
blkdiscard /dev/nvme0n1
blkdiscard /dev/nvme1n1
before formatting them with the webgui.
EDIT: Come to think of it, I think I had unusually low performance the first time I tried it... I then tried the above while troubleshooting the issue and may have accidentally fixed it as a result... Also, might this have something to do with the size difference between GPT/MBR and having no partition table?
Edited March 23, 2018 by Warrentheo
Peanutman85 Posted May 3, 2018
On 3/21/2018 at 2:14 PM, limetech said: "Thanks for the info. Haven't read through all that yet, but would you speculate: a) samsung with btrfs cache disk => issue exists b) samsung with xfs cache disk => issue does not exist"
I also have this issue with a Samsung cache disk. It was btrfs, so to try to resolve it, I reformatted it as xfs, but I'm still experiencing the problem.
the_larizzo Posted June 30, 2018
On 3/22/2018 at 9:42 PM, Warrentheo said: "I have 2x 960 EVO's in Raid-0 cache... Just checked my starting blocks and they were both at zero... Possibly because I followed these steps before formatting them:"
How do you validate the starting block?
Warrentheo Posted July 5, 2018
On 6/30/2018 at 5:50 PM, the_larizzo said: "How do you validate the starting block?"
fdisk -l
It should usually not be 0; on mine it is 64... It depends on the drive you have... If you don't have speed issues with your drives, then you shouldn't worry about this stuff...
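To pull just the start sector out of that output, a small sketch (the device name and sector numbers below are made up for illustration; point the awk line at real `fdisk -l` output instead of the sample file):

```shell
# Hypothetical fdisk -l partition table, saved as a sample for illustration
cat <<'EOF' > /tmp/fdisk-sample.txt
Device     Start        End    Sectors   Size Type
/dev/sdb1     64  976773134  976773071 465.8G Linux filesystem
EOF

# Print each partition's start sector (lines whose first field is a device)
awk '$1 ~ /^\/dev\// { print $1 " starts at sector " $2 }' /tmp/fdisk-sample.txt
# -> /dev/sdb1 starts at sector 64
```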
unabletoconnect Posted April 15, 2019
So I've just been trying to work out why I get massive CPU spikes and slow writes to my cache. I've recently added a Samsung drive in BTRFS RAID 1 with a WD SSD... Has there been any progress on this issue, or am I out of luck?
thomast_88 Posted October 27, 2019
Any way to set this manually?
glennv Posted October 27, 2019
Trying to follow this, but confused. I have 4 Samsung 860 EVO 1TB SSDs in RAID 10 as cache and get great read and write speeds accessing them over a 10G network: average 450-500MB/s write and 800MB/s read. Also, when I had only two drives in RAID 1, all was great. I did not do anything special; I just had unraid create the cache with the default btrfs settings. What am I missing? If it were a wrong default formatting option for Samsungs, wouldn't I also be affected? Is there another factor at play here that explains why only some are affected?
PS: starting blocks all at 64.