sean

Members
  • Posts: 9
  • Joined
  • Last visited

sean's Achievements

Noob (1/14)

Reputation: 0

  1. High availability between multiple Unraid nodes. No way this gets implemented any time soon, but it would be pretty amazing.
  2. I'd like to say that I am in support of this change, so long as the original promises to current Pro license holders are maintained. I want Unraid to continue as an option in the home NAS/server field, and without continual funding they eventually will not be able to continue. The only thing I worry about is the possibility of the same thing happening that has happened to Plex: ignoring what users want and implementing features that no one asked for. So far that hasn't happened, and it seems to me Limetech actually cares about what their users ask for in the forums, even if sometimes they can't make everything we ask for happen. Honestly, I'd go as far as saying that there really shouldn't be a one-time payment option for perpetual access to updates. I'm glad they have it for those that really want it, but it could lead to not enough cash flow. I believe this change will allow them to make more progress and better the product, but only time will tell. I'd be interested to hear others' thoughts if they believe differently. P.S. Are we going to get that second array option? ZFS is cool, but the scalability of the main array is amazing.
  3. It's been a while. During my testing I was unable to solve the issues with the NVMe drives dropping out while all four were in RAID 10 with btrfs as the filesystem. I ended up having to move back to XFS with all four drives as independent cache pools. Also, I do find it humorous that soon after I started testing RAID cache pools, ZFS was officially supported by Unraid. 🤷‍♂️ Unfortunately, in the interim before ZFS was officially supported I needed to upgrade one of the NVMe drives from 1TB to 4TB, which limits my ability to use it in a ZFS pool. So in lieu of moving a ton of data around just to go back to a smaller drive, I will instead upgrade the other three NVMe drives over time and then use RAID 10 or RAID-Z2 depending on my storage needs at the time. I'm not really sure why I had such a bad experience with btrfs, and I'm sure others have had great success with it, but my opinion is that ZFS is probably safer and more stable anyway. Thanks for the help @JorgeB.
  4. On it, will let you know the results. It may take a while to determine whether the solution has worked. Thank you for responding.
  5. Hello, I've been having an issue that has recently started popping up: one of my NVMe cache drives keeps dropping. I have four cache pools set up, with the following names:

    Cache
    DownloadCache
    Plexcache (this is the faulty/disappearing one, device nvme1n1p1)
    Systemcache

    I have four separate cache pools because when I attempted to run them in btrfs RAID 10 it was horrible, but that isn't the issue here. All four cache pools are single NVMe drives; none are in any form of RAID, and they are formatted with XFS. The NVMe drives I am using are XPG GAMMIX S50 Lite (https://www.xpg.com/us/xpg/681?tab=specification). All four drives run on the AORUS Gen4 AIC Adapter card (https://www.gigabyte.com/us/Solid-State-Drive/AORUS-Gen4-AIC-Adaptor/sp#sp). The motherboard is an ASRock Rack ROMED8-2T (https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications). The slot the adapter is in is set to 4x4x4x4 bifurcation mode with the speed manually set to PCIe 3.0. Previously I was getting PCIe errors with the drives running at PCIe 4.0, but those errors went away when I forced the speed down to 3.0 (I think this was due to communication errors/signal integrity).

    The problem is that the cache pool named Plexcache will sometimes drop out. The device is nvme1n1p1. None of the other NVMe drives on the adapter exhibit this inconsistent behavior. Luckily I have an Elastic cluster that ingests my Unraid server's syslog, so I can see the error messages, but I don't know how to solve the issue. I have run a SMART short self-test on the drive and it reports "No Errors Logged"; the SMART report is attached. The attached file "Syslog nvme keyword search.csv" contains the syslog filtered for *nvme*, and "Syslog 2 days.csv" contains all syslog data from the past two days (a small sketch of this kind of filtering is included after this list). If anyone has experienced something like this, please let me know.

    Specs:
    Motherboard: ASRock Rack ROMED8-2T (BMC firmware 1.19.00, BIOS P3.50)
    CPU: AMD EPYC 7542 (AMD EPYC 7002 series, Socket SP3, 32 cores / 64 threads, 2.9 GHz base / 3.4 GHz boost, 128MB L3 cache, 3200 MHz memory controller, 8 memory channels, PCIe 4.0, 128 PCIe lanes, 225W TDP)
    CPU cooler: Noctua NH-U9 TR4-SP3
    RAM: Kingston 32GB DDR4 (KSM32RD4/32HDR)
    Flash storage:
      4x XPG GAMMIX S50 Lite 1TB M.2 2280 PCIe Gen4 x4 NVMe (XPG_GAMMIX_S50_Lite_2L252LQJ58LY, XPG_GAMMIX_S50_Lite_2L252LQH8ERH, XPG_GAMMIX_S50_Lite_2L25292BJACA, XPG_GAMMIX_S50_Lite_2L2529QB66YE)
      1x Samsung 970 EVO Plus 1TB (Samsung_SSD_970_EVO_Plus_1TB_S59ANJ0N123475B, unused)
    Case: Norco RPC-4220
    NVMe PCIe 4.0 adapter: AORUS Gen4 AIC Adapter (GC-4XM2G4)

    Attachments: XPG_GAMMIX_S50_Lite_2L252LQH8ERH-20230506-1424.txt, Syslog nvme keyword search.csv, Syslog 2 days.csv
  6. BTRFS Issues

    I don't know why exactly, but btrfs RAID 10 on four NVMe M.2 SSDs (running at PCIe 3.0) did not work well for me. I noticed consistent slowdowns, down to around 50MB/s, when SABnzbd was downloading to my Downloads share, which lived on the RAID 10 btrfs pool. These drives are individually capable of multi-gigabyte-per-second writes. The appdata share also lived on this cache pool, and while these downloads were occurring other services, including Plex, suffered significant performance degradation, to the point of Plex not responding anymore.

    All four NVMe drives are connected directly to a PCIe x16 carrier card. I used PCIe bifurcation to have all the drives show up properly, and each drive had a full x4 link. Based on my experience I'd highly suggest not using btrfs RAID 10. I did not test RAID 1; I was considering running two cache pools in RAID 1, but at this point I know XFS works and is very performant. I will not have drive-failure protection on my cache pool, but at least my services will be stable now. I'll wait for ZFS to become fully supported by Unraid and then try using that for the cache pool. (Please be soon.)

    On a side note: if you have multiple NVMe drives connected to your server through a PCIe card that supports multiple drives, you are using PCIe bifurcation, and you are seeing PCIe hardware errors, I'd suggest changing the PCIe version/generation the slot runs at back to 3.0 from 4.0. On my AMD EPYC system, PCIe 4.0 was causing multiple correctable hardware errors which I was able to attribute to a poor connection/signal integrity; once I changed the slot to 3.0 those errors went away. (A small sketch for checking each drive's negotiated link speed is included after this list.) Also, I experienced the same slowdowns on the btrfs RAID 10 cache pool after fixing the signal-integrity issue, so the signal-integrity issue did not cause the overall performance problem.

    I'd love to hear others' experience with btrfs RAID 10. Have you had the same performance issues? And a large thank you to all the members of the forum for posting about their issues and solutions; without that I'd have been lost in solving many of these issues.

    Server specs for reference:
    Motherboard: ASRock Rack ROMED8-2T (BIOS: American Megatrends Inc., version P3.50)
    CPU: AMD EPYC 7542 32-Core Processor
    RAM: Kingston 32GB ECC (two DIMMs, 64GB total)
    NVMe drives: 4x XPG GAMMIX S50 Lite 1TB PCIe 4.0, 1x Samsung SSD 970 EVO Plus 1TB PCIe 3.0
  7. RAID card died, I think. I booted a fresh Unraid 6.3.5 USB and saw no drives. What's weird, though, is that the BIOS knows the HDDs are there. I'm not sure what to make of this. If you agree that the card died, which one should I get to replace it? I have a Norco RPC-4220 case and I'd need something compatible with that.
  8. I have run it before with them disabled too. When I flashed the BIOS it reset both to disabled, and the drives still weren't there. I'm making a 6.3.5 Unraid USB drive now just to boot from as a test, to see if it can see the drives. If it can, then I'm going to say that 6.4.1 is not ready. My configuration is pretty normal, nothing weird, so I'm pretty upset that I'm having issues.
  9. Specs: ASUS X99 Deluxe, 32GB RAM, Norco RPC-4220, M1015 HBA flashed to IT mode (I know it's flashed because the drives were visible before the update). I upgraded my server from 6.3.5 to 6.4.1, which I knew was a stupid idea, but here we are. I was expecting it to go fine, but no such luck. First I upgraded Unraid via the webGUI, then I upgraded the motherboard BIOS to the latest version from ASUS's website. I then booted into Unraid 6.4.1 and saw no drives at all from the HBA. I googled and found I needed VT-d and Intel Virtualization Technology enabled; I enabled both and still nothing. I replaced the CMOS battery just in case too, but still no drives. I'm now at a loss. It was working on the previous version without a hitch, so I have no idea what changed. The BIOS can see all the HDDs just fine, but Unraid sees only the HBA card. I've attached my syslog.txt and a picture of the device list showing the HBA. Any help would be greatly appreciated! (Attachments: syslog.txt, tower-diagnostics-20180303-1949.zip)
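
A minimal sketch of the *nvme* keyword filtering mentioned in post 5, assuming the exported CSV has "@timestamp" and "message" columns; the column names, keywords, and file name are assumptions about the Elastic syslog export, not the exact filter used in the post:

    # Minimal sketch: filter an exported syslog CSV for NVMe-related entries.
    # Column names ("@timestamp", "message") and the file name are assumptions
    # about the Elastic CSV export; adjust them to match your own data.
    import csv

    KEYWORDS = ("nvme", "pcie bus error", "i/o error")

    def nvme_events(path):
        """Yield rows whose message mentions NVMe or related bus/I-O errors."""
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                message = (row.get("message") or "").lower()
                if any(keyword in message for keyword in KEYWORDS):
                    yield row

    if __name__ == "__main__":
        for row in nvme_events("Syslog 2 days.csv"):
            print(row.get("@timestamp", ""), row.get("message", ""))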
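For the signal-integrity troubleshooting described in post 6, the negotiated link speed and width of each NVMe controller can be read from standard Linux sysfs attributes. This is a small sketch assuming the usual /sys/class/nvme layout; it is not Unraid-specific and is not part of the original posts:

    # Minimal sketch: report negotiated vs. maximum PCIe link speed/width for
    # each NVMe controller via standard Linux sysfs attributes. A negotiated
    # speed below the drive's maximum (e.g. 8.0 GT/s on a Gen4 drive) matches
    # the forced-PCIe-3.0 workaround described in post 6.
    from pathlib import Path

    def read(path):
        try:
            return Path(path).read_text().strip()
        except OSError:
            return "n/a"

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci_dev = (ctrl / "device").resolve()  # underlying PCI device directory
        print(ctrl.name,
              "| model:", read(ctrl / "model"),
              "| link:", read(pci_dev / "current_link_speed"),
              "x" + read(pci_dev / "current_link_width"),
              "| max:", read(pci_dev / "max_link_speed"),
              "x" + read(pci_dev / "max_link_width"))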