Maz

Member · 11 posts
Everything posted by Maz

  1. My concern is that ZFS does not have the same flexibility as XFS for mixed drive sizes and pool expansion. Yes, of course ZFS is much more robust, but I think I speak for many core users who purchased Unraid for the same reason: we just want XFS, or a file system that allows easy expansion of drive pools and the ability to mix and match drive sizes with dual parity, and nothing more. I'm hoping the original focus that made Unraid the better option for many won't be forgotten while jumping on the bandwagon of supporting ZFS. Again, this is just my humble feedback, for what it's worth. And what happens if the Linux kernel does in fact drop support for XFS? Are the core users of Unraid left out because XFS becomes deprecated in Linux? I only bring this to the attention of the Unraid team in the hope that it can plan ahead with a clear path to support this functionality at its core, for those users who care about the conveniences of XFS and nothing more. That's what I enjoy most about using Unraid vs. the others. As a home user, I've been using it for years with XFS and have been content with what it is.
  2. I'm a home user and enjoy the flexibility of using XFS with mixed drive sizes and expandable drive pools, which was the main attraction of Unraid in the first place. I simply have no interest in switching over to ZFS. If XFS is removed from future builds, I will no longer be interested. A question for the Unraid dev team: what are your intentions regarding XFS support on the future roadmap?
  3. Thanks. So, I changed the Ethernet rx/tx buffer sizes from 512 back to the default of 256 (using the Tips and Tweaks plugin) and enabled flow control. Not sure exactly why, but so far this seems to have resolved the dropped-connection issue. I will continue to monitor. My AMD system's RAM speed is set to 2400 MHz with the memory profile turned off, which is the slowest speed allowed, so I'm not seeing any relation between the connection drops and RAM speed or improper motherboard settings. Still a little confused as to why, but the server seems stable after the above Ethernet buffer size changes with flow control enabled.
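For anyone wanting to apply the same settings without the Tips and Tweaks plugin, they can be made from the command line. A minimal sketch, assuming the interface is named eth0 (check with `ip link`); exact supported values depend on the NIC driver:

```shell
# Set rx/tx ring buffers to 256 descriptors (the driver default mentioned above)
ethtool -G eth0 rx 256 tx 256

# Enable rx/tx flow control (pause frames)
ethtool -A eth0 rx on tx on

# Verify the settings took effect
ethtool -g eth0
ethtool -a eth0
```

Note these settings do not persist across reboots on their own; the plugin reapplies them at startup.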
  4. Please advise. My Unraid server has been dropping its connection at random for months, and it is getting worse. Each time requires a hard shutdown of the array; after a reboot it lasts a while, then suddenly disconnects again. Syslog and diagnostics attached. syslog.txt unraid1-diagnostics-20230330-1453.zip
  5. Same problem with the latest My Servers plugin (2022.09.06.2108): Failed to sync flash backup.
  6. Thanks for the feedback. I went ahead and ordered a fan (Noctua NF-A4x10) to mount permanently on the HBA controller's heat sink. I first tested by temporarily holding a fan over the heat sink of the HBA, and it made a big difference temperature-wise. My newly licensed Unraid build is in a tower case, so it likely lacks the cooling of the server cases these cards are generally intended for. Between moving the SAS card to a different PCIe slot and the better cooling, I shall see if this eliminates the dropping of disks from my array. Once the system is stable, my next concern is loading the HBA card up with 8 drives (8i). Wondering if the LSI card produces more heat with 8 drives connected vs. only 1. Thanks again.
  7. Thanks for the feedback. I have 2 PCIe slots, so I moved the H220 (HBA 9207) card to the secondary PCIe slot and re-seated the cables. It's rebuilding the parity again now; we shall see if this makes any difference. The primary PCIe slot is generally intended for graphics cards, so I'm not sure whether having the HBA card installed there can cause any issues. My secondary PCIe slot is x4 vs. x16, so less bandwidth. On a second note, my H220 SAS controller card's heat sink is very hot to the touch. I know these things are known to run hot, but I am worried the heat might also be a factor. Is it common practice to install fans on these controller cards, or to consider re-applying the heat-sink thermal paste? I have good airflow in the case, so it's not that.
  8. Disk 5 keeps going Missing (Disabled) in my array. I cannot identify whether this is an issue with my H220 HBA controller card, a SATA cable, or a bad disk; SMART reports show no errors on the disk. I have rebuilt the array 3 times, and the disk eventually goes offline again within a few days. In my attached logs, you will see that the SAS controller suddenly goes "non-operational", followed by a write error. Not sure how to properly diagnose this. I appreciate any guidance. Thank you. unraid2-diagnostics-20190503-1000.zip
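For anyone hitting the same symptom, the controller fault is usually visible in the syslog. A minimal sketch of pulling those lines out of a saved log, assuming the H220 (an HP-branded LSI 9207) is driven by the mpt2sas kernel driver; the sample log lines below are illustrative placeholders, not taken from the actual diagnostics:

```shell
#!/bin/sh
# Write an illustrative sample log (the real lines live in the syslog
# inside the diagnostics zip); the content here is hypothetical.
cat > /tmp/sample-syslog.txt <<'EOF'
May  3 09:58:01 unraid2 kernel: mpt2sas_cm0: SAS host is non-operational !!!!
May  3 09:58:02 unraid2 kernel: md: disk5 write error, sector=123456
EOF

# Filter for controller-fault and write-error lines
grep -iE 'mpt2sas|non-operational|write error' /tmp/sample-syslog.txt
```

If the "non-operational" message always precedes the write error, that points at the controller (or its slot, power, or cooling) rather than the disk itself.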
  9. Thanks, your feedback was much appreciated. I've since added an SSD cache drive to my configuration and also switched the default file system for the data drives back to XFS. My array with a valid parity drive starts much quicker now, in under 10 seconds, whereas before it was taking minutes. I am very impressed with Unraid and will likely purchase more than one license. That being said, I'm not sure why it was taking 3 1/2 minutes before, since it was not the first time the array was started; all that changed for me was adding an SSD cache drive, changing the data drives back to XFS, and updating to 6.7.0-rc6. Starts right up now. All good!
  10. Hi, need some advice please; a few questions. I'm new here, so please go easy on me. New install: I did a fresh Unraid install with a trial key, built an array with 2 data drives and 1 parity drive, and created some shares. No Docker containers and only 3 plugins.

      Q1: When I go to start the array, it says "Starting services" and takes approximately 4 to 5 minutes before it switches over to "Array started". Is it normal for the array to take 5 minutes to start? I only have 2 data drives and 5 shares so far. Will it take longer to start if I have 10 drives instead of 2? My parity drive is an 8TB WD Red and the 2 data drives are each 4TB; no errors are reported on any drive once the array finally starts. I am using version 6.7.0-rc5 since I am running on an AMD Ryzen 1700 platform, which seems stable after disabling C6 states and adding "rcu_nocbs=0-15" to the SYSLINUX configuration:

      kernel /bzimage append initrd=/bzroot rcu_nocbs=0-15

      and

      kernel /bzimage append initrd=/bzroot,/bzroot-gui rcu_nocbs=0-15

      I am left with an uneasy feeling, given that the services take 5 minutes to start, and am worried that in the future it will hang and not start at all. Before purchasing a license I need to feel confident this is not going to become something to always worry about over the coming years. By the way, stopping services seems to take around a minute or less to complete.

      Q2: I am also confused about something since adding the 2nd disk to the array. With the default disk file system set to BTRFS under Disk Settings, after clearing the drive it shows the disk defaulting to "auto", and I seem to be forced to switch it to BTRFS, which obviously requests to format the drive once the array is started again. Is that normal? Appreciate the feedback, and thank you! Diagnostic log attached. unraid-diagnostics-20190326-1205.zip
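For reference, the rcu_nocbs tweak mentioned in Q1 goes into the label stanzas of syslinux.cfg on the Unraid flash drive. A sketch of how the stock stanzas might look with the flag applied; the label names follow the default Unraid layout, which can differ between releases:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot rcu_nocbs=0-15

label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui rcu_nocbs=0-15
```

The same edit can be made from the Unraid web UI via Main → Flash → Syslinux Configuration, which avoids hand-editing the file.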