brando56894 Posted July 20, 2017
I have a 256 GB Samsung 960 Evo attached to the M.2 slot on my Supermicro X10SDV-F and I've had nothing but problems with it in unRAID. It works fine in FreeNAS, Arch Linux, Ubuntu, and CentOS 7.3, but in unRAID it runs fine for about an hour under heavy load, then btrfs freaks out and remounts the filesystem read-only. If I put a KVM HDD image on it, it gets corrupted after an hour or two of heavy I/O. I switched to XFS and the corruption stopped, but the drive was still slow. I switched to a SATA SSD and have had no issues under heavy load.
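For anyone hitting the same symptom, a minimal sketch of how to confirm it is btrfs forcing the read-only remount (rather than the drive dropping off the bus) is below. The device name and log lines are hypothetical stand-ins for real `dmesg` output, used so the filter can be demonstrated deterministically; on a live box you would inspect the kernel log directly.

```shell
# On a running unRAID system you'd check the real sources:
#   dmesg | grep -i btrfs            # kernel-level btrfs errors
#   btrfs device stats /mnt/cache    # per-device error counters
# The sample lines below (hypothetical device names) stand in for
# real log output so the grep filter can be shown with a known result.
events=$(printf '%s\n' \
  'BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 12, rd 0, flush 0' \
  'BTRFS info (device nvme0n1p1): forcing filesystem read-only' \
  'nvme nvme0: I/O 42 QID 3 timeout, aborting' \
  | grep -ci 'btrfs')
echo "btrfs events in sample log: $events"
```

If the btrfs error counters climb while the NVMe layer itself reports timeouts, that points at the drive/slot rather than the filesystem.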
RifleJock Posted July 20, 2017 (edited)
Well, currently I have the 750 unplugged from my system, as it was causing high CPU usage from all of the continuous error logging, and it would crash the web GUI after a day or two. I used the 750 with four WD 1 TB 10,000 RPM drives in my cache pool, which was automatically configured as RAID 1 across the five drives. I'm not sure when I actually started getting errors with the SSD, but I'm almost certain it's because I changed the cache pool to btrfs for some of those cool features. I mainly used the system as a man-in-the-middle for ETH mining with multiple GPUs, hosted at a friend of mine's house (off-site for me); the thought when I built the "server" was that it would be good for rendering, streaming, the occasional VM, and LAN game caching. But because of all the issues, no ETH mining. The longest I ever got the machine to run and still be responsive was around 7 to 9 days before having to physically shut it off. Because of these issues, I have since brought the server back to my house to monitor it better. I actually have a 960 Evo 256 GB as well. It is currently in my gaming rig, but I could try it out in unRAID to see how it behaves for you if you want; I would likely use btrfs in the cache pool with it. On the topic of speeds, I have never benchmarked my drives, but from what I have seen through unRAID, I have never achieved anything very fast at all. So far, though, I haven't really done anything more than network transfers. At one point I did have Windows VMs on there and noticed slow-ish response times (1000~3000 ms for drive activity), but I figured that was to be expected with four HDDs and an SSD in a block-level RAID 1 hosting a VM through unRAID. I do have the benchmark plugin now, so I will test the drives after the turbo mover and parity checks finish.
(Checks take me around 18~23 hours to complete with 4x Seagate 8 TB drives--one as parity, three as data. They get around 150~350 MB/s, and upwards of 450 MB/s with turbo mover. I know my pool is capable of 210~250 MB/s x4, around 1200 MB/s, of sequential writes, and about double that in reads, which I have seen done in Windows. But I don't think I have ever seen my cache pool get over ~100 MB/s on writes; not sure if unRAID is actually reporting the speeds correctly. I can, however, transfer things to my "NAS", which is a share on the cache pool, at ~110 MB/s without a hiccup. That is accurate for 1000BASE-T network transfers: 1000 Mb/s = 125 MB/s, then consider overhead.) Does your log show errors from your 960 Evo? Mine made it quite obvious it was the 750 U.2 SSD, as the U.2 port on my motherboard shares bandwidth with the PCIe 3.0 x16 slot, and the logs showed a device address along the lines of 0:3.00.00 or something. According to many forum threads I have read on this issue, it looks to be an NVMe problem across many Ubuntu versions. People often blamed their GTX 1080s, since those were in the same slot sharing bandwidth with an M.2 or U.2 drive; most believed it was a BIOS/firmware issue on the drives, GPU, or motherboard. But I have yet to have any problems with my 1080 Ti, 980, or 530, or with a friend's 750 Ti. From that research, I have come to the assumption that unRAID doesn't support NVMe, or at least didn't until recently; nor did most versions of Ubuntu or Arch Linux. OH! I also forgot to stress that these issues are all occurring on unRAID: other than unRAID, nothing but Windows has ever been installed on my hardware. I have always run Linux in a VM, usually through Hyper-V, so I wouldn't know if I have ever had errors like this before. @brando56894 Edited July 20, 2017 by RifleJock
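Those two ballpark figures check out arithmetically. A quick back-of-the-envelope sketch, using the numbers from the post above (the ~100 MB/s average for a full parity pass is an assumption drawn from the quoted speed range):

```shell
# 1000BASE-T: 1000 Mb/s over 8 bits per byte = 125 MB/s raw, before
# TCP/IP overhead -- so a sustained ~110 MB/s transfer is about right.
ceiling=$(awk 'BEGIN { printf "%d", 1000 / 8 }')
# A full pass over an 8 TB drive at an assumed ~100 MB/s average:
# 8e12 bytes / 100e6 bytes-per-second / 3600 s-per-hour, which lands
# inside the 18~23 hour window quoted above.
hours=$(awk 'BEGIN { printf "%.1f", 8e12 / 100e6 / 3600 }')
echo "gigabit ceiling: ${ceiling} MB/s"
echo "8 TB pass at 100 MB/s: ${hours} hours"
```

In other words, the network share is already saturating gigabit Ethernet, so the cache pool's raw speed is invisible over the wire.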
JorgeB Posted July 20, 2017
I've been using a Toshiba/OCZ RD400 512GB for months without issues, other than the temperature not being displayed.
RifleJock Posted July 20, 2017
What type of motherboard? @johnnie.black
JorgeB Posted July 20, 2017
2 minutes ago, RifleJock said: What type of motherboard?
Supermicro X11SSM-F; the RD400 I have is the AIC version, with the PCIe adapter included.
brando56894 Posted July 22, 2017
On 7/20/2017 at 11:07 AM, RifleJock said: I have come to the assumption that unRAID doesn't support NVMe, or at least didn't until recently. Nor did most versions of Ubuntu or Arch Linux. OH! Also forgot to stress that these issues are all occurring on unRAID.
I did have errors in dmesg when I was actually using it as a cache drive, but now that it is formatted but unused I see no such errors. unRAID has been highly unstable for me, though, so I'm going to remove the drive completely and see if that fixes the issue. I've used my NVMe drive in Arch and Ubuntu with no issues.