CybranNakh

Everything posted by CybranNakh

  1. So it is the one plugged into the LSI card, and that same HDD fails the SMART test. It makes me think it is a software/firmware glitch. Just got new SAS breakout cables... will try again tonight and report back.
  2. I have ordered new SAS-to-SATA cables... I had bought cheap ones... hopefully the new cables will help. As for the extra MAC address registration, the helium levels, and the SMART reporting: no progress!
  3. Hello! I just did some upgrades to my server and now there have been some strange changes. I installed an LSI 9207-8i and a CyberPower UPS, and two problems have appeared since. 1) A second MAC address is registering to the internal IP of my server... Unraid is 10.0.0.5 and I can access it, but a second MAC address shows up with the same IP. It is only there when the Unraid server is on, so it has to be something on the server. I have tried: stopping all dockers, disabling Docker, and setting a static IP for Unraid in Settings. 2) Some of the drives connected to the LSI card seem to have problems. The SSDs are showing high CRC error counts, and I cannot run SMART tests on the hard drives. One is showing Failing Helium, but the value is still 100 like all the rest (Western Digital Reds). I have tried re-seating the card and upgrading the LSI firmware to 20.00.07.00, which stopped the CRC count from increasing on the SSDs. Any help on either problem is appreciated! Thank you! apollo-diagnostics-20201105-1047.zip
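     A quick way to chase the duplicate-MAC symptom is to watch ARP from another machine on the LAN and see which hardware addresses answer for the server's IP. A minimal sketch, assuming a Linux client whose LAN interface is eth0 (adjust to taste):
     ```bash
     # Watch ARP traffic for the server's IP (10.0.0.5 from the post) and note
     # every MAC that answers -- two different MACs here confirms the duplicate.
     tcpdump -n -e -i eth0 arp and host 10.0.0.5

     # Alternatively, ping the server and inspect the client's neighbor table:
     ping -c 3 10.0.0.5 && ip neigh show 10.0.0.5
     ```
     If the second MAC only appears while the server is powered on, comparing it against `ip link` output on the server itself can show whether it belongs to a bridge, bond, or VM interface rather than a foreign device.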
  4. Is upgrading to 6.9.0-beta25 all that is needed to fix this bug? I see from the changelog that this issue has been fixed. I currently have encrypted xfs on my array and my cache drive, but iotop -oa still shows loop2 writing excessively. I'm assuming I am just missing something obvious here... I see one of the recommendations by @testdasi was to recreate the img as docker-xfs.img as a workaround.
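     Before recreating the image, it is worth confirming that loop2 really is backed by the Docker image and not something else. A minimal check, assuming a stock Unraid layout:
     ```bash
     # List loop devices with their backing files; loop2 should point at the
     # Docker image if that is what iotop is flagging.
     losetup -l | grep loop2

     # Accumulated I/O per process since iotop started; lets the loop2 writes
     # grow visibly over time instead of sampling a single instant.
     iotop -oa
     ```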
  5. This helped a lot! While the commands did not work for me, I found another comment in the thread discussing the loop2 error with encrypted xfs on the array and btrfs on the cache. The solution for me was to convert the cache drive to encrypted xfs. So far, the writes and GBs written have returned to normal levels! (This jogged my memory that the cache issue coincided with converting my array to encrypted xfs.)
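     After a conversion like this, the filesystem stack can be sanity-checked from the shell. A sketch, assuming the standard /mnt/cache mount point:
     ```bash
     # Per-device filesystem types; an encrypted xfs cache should show a
     # crypto_LUKS container with xfs on the device-mapper node inside it.
     lsblk -f

     # Filesystem type of whatever is mounted at the cache path.
     df -T /mnt/cache
     ```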
  6. I have tried that command. Thank you for your reply! How would I look at the SMART data? From googling, it looks like I can just download the SMART report? I have attached it here (serial number removed). SMART Report.txt
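     For reference, the same report can be generated from the command line. A minimal sketch; /dev/sdb is a placeholder for the actual drive:
     ```bash
     # Dump everything smartctl knows about the drive into a text file.
     smartctl -x /dev/sdb > SMART_Report.txt

     # For drives behind a SAS HBA like the LSI 9207-8i, forcing SAT
     # pass-through sometimes returns SMART data the default probe misses.
     smartctl -d sat -x /dev/sdb
     ```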
  7. Hello everyone! My server has been up for 11 days now... during that time there have been 55 million writes to my cache drive. Now, as much as I would like to think my cache is doing its job... this seems rather excessive and the perfect way to kill the drive. How can I tell if this is part of the btrfs format bug rather than something I have done incorrectly? Should I just try and switch my cache format? I am running a Plex server, which I have read on some of the related forums can cause heavier cache writing, but not this much to my knowledge... Thanks for any help! Solution: my array was encrypted xfs while the cache was btrfs, which others have said makes the loop2 bug worse. My solution was to convert the cache to encrypted xfs... for me this has brought the excessive writes back down to normal levels! apollo-diagnostics-20200706-1608.zip
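     To put a number on "excessive", the kernel's per-device counters can be converted to bytes written since boot. A rough sketch; sdc is a placeholder for the cache device:
     ```bash
     # Field 10 of /proc/diskstats is sectors written (512 bytes each), so this
     # prints approximate GB written to the device since boot.
     awk '$3 == "sdc" { printf "%.1f GB written since boot\n", $10 * 512 / 1e9 }' /proc/diskstats
     ```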
  8. That is true. I'll get rid of it... I have been meaning to upgrade the cache drive to at least 1TB just to have extra headroom... I don't really fill up the cache as of yet, but once I start running VMs, I have a feeling I will need more space! Thanks for all your help!!
  9. It is kinda stupid... but I basically have two folders both named Downloads... one on an unassigned device and one on the cache... the one on the cache is not really in use; the one on the unassigned device is the main one.
  10. Sorry for the delay! I have been moving! I have attached the new diagnostics... I took the Docker service offline, ran the mover, and then restarted Docker. I have also now set Systems to Only. apollo-diagnostics-20200514-2046.zip
  11. Thank you so much! This has been driving me nuts since I was so worried about so much use on the hard drive! You are correct: the user share "Downloads" was moved to an unassigned device. I have switched this to cache-prefer as suggested. As for the Systems folder, I have set it to cache-prefer as you recommend! Hopefully this reduces the number of reads and writes to the array; with the description you gave (and the fact that the docker image lives there), I am fairly certain you solved my problem! I have also run the mover. Thanks again! I will monitor the reads/writes and see whether the number continues to grow out of control!
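     One way to review every share's cache policy at once, assuming Unraid's usual habit of keeping one .cfg file per user share on the flash drive:
     ```bash
     # Print the cache setting (only / prefer / yes / no) for each user share.
     grep -H shareUseCache /boot/config/shares/*.cfg
     ```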
  12. I have attached it! I also checked file activity, and it showed basically only a couple of shows being watched on Plex and a few files being opened on the unassigned drive. apollo-diagnostics-20200508-2143.zip
  13. Hello everyone! I am rather new to Unraid and to these forums. I have done my best to follow the unbelievable videos of SpaceInvaderOne, but I believe I have made a mistake somewhere down the line. My array shows a single drive as having over 18 million reads/writes, which is slightly less than the 20 million reads/writes to the cache. Am I doing something wrong somewhere? My appdata, ISOs, and Systems shares are all set to cache only. I am running dockers and no VMs as of yet. I have attached screenshots that might be helpful... I also installed iotop and ran iotop --only, but that did not really tell me much. Anyone have any ideas? I really don't want to kill the drive because of a silly mistake I made somewhere! Thank you!!
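     If a single iotop --only run shows little, accumulating over a longer window usually works better. A sketch of that:
     ```bash
     # Batch mode (-b), active processes only (-o), accumulated totals (-a):
     # six samples ten seconds apart, so intermittent writers still add up.
     iotop -b -o -a -d 10 -n 6
     ```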