About zyv
  1. Got directed to this thread by @JorgeB - thx mate. Same issue here with an Adaptec 6805H HBA. 6.9.2 is not working, with this failure in the log: pm80xx0:: pm8001_pci_probe 1107:chip_init failed [ret: -16]. Reverted back to 6.8.3 for now...
  2. Ok, found the issue myself... the driver isn't working for it. The log says: pm80xx0:: pm8001_pci_probe 1107:chip_init failed [ret: -16] ...so I'll have to change hardware, wait for a new driver, or roll back.
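  In case it helps anyone landing here: a quick way to confirm you're hitting this same probe failure is to grep the kernel log for the pm8001 driver and decode the return code. A small sketch (the sample line is the one from my log; on a live system you would grep dmesg instead, and note that -16 maps to EBUSY, "Device or resource busy"):

  ```shell
  #!/bin/sh
  # Sketch: confirm the pm80xx/pm8001 probe failure and decode the errno.
  # On a live unRAID box you would run:  dmesg | grep -i 'pm8001_pci_probe'
  # Here we use the exact line from my log as a stand-in:
  sample='pm80xx0:: pm8001_pci_probe 1107:chip_init failed [ret: -16]'

  # Did the chip init fail?
  echo "$sample" | grep -q 'chip_init failed' && echo "pm80xx probe failed"

  # Pull out the return code; -16 is -EBUSY ("Device or resource busy").
  code=$(echo "$sample" | sed -n 's/.*ret: \(-[0-9]*\).*/\1/p')
  echo "errno=$code"
  ```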
  3. Hey, I have the same issue with my Adaptec 6805H HBA. It works flawlessly in 6.8.3, but if I update to 6.9.2, all drives connected to it are gone (which is basically my whole array). The controller itself is still recognized - see attached file. If I revert, the drives are there again. Under /etc/modprobe.d/ I have the mpt3sas.conf, which already contains max_queue_depth=10000. Any other idea? :(
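  For reference, this is roughly what that workaround file looks like (standard modprobe.d option syntax; note it targets the mpt3sas driver, so as far as I understand it wouldn't affect the Adaptec's pm80xx driver anyway):

  ```
  # /etc/modprobe.d/mpt3sas.conf
  options mpt3sas max_queue_depth=10000
  ```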
  4. Ok, I also did a normal xfs_repair -v. I didn't see it actually repair anything, but I could start the array normally and everything seems to be there and working fine. So, since that's finally solved, does anybody have ideas on how to solve the S3 sleep issue? Thx in advance.
  5. That's what it said - I can't see an issue / failure - am I missing something?
  Phase 1 - find and verify superblock...
          - block cache size set to 698544 entries
  Phase 2 - using internal log
          - zero log...
  zero_log: head block 30438 tail block 30438
          - scan filesystem freespace and inode maps...
          - found root inode chunk
  Phase 3 - for each AG...
          - scan (but don't clear) agi unlinked lists...
          - process known inodes and perform inode discovery...
          - agno = 0
          - agno = 1
          - agno = 2
          - agno = 3
          - agno
  6. There we go - new diagnostics server-diagnostics-20200320-1826.zip
  7. So, it finished, but you were definitely right - it still says unmountable. If I try mounting it manually, I get the typical "wrong fs, bad superblock...." message. The question now is: how to go on? Just a normal xfs_repair, or something special? Thx already for your willingness to help.
  EDIT:
  Mar 18 21:59:21 Server emhttpd: Spinning up all drives...
  Mar 18 21:59:21 Server kernel: mdcmd (63): spinup 0
  Mar 18 21:59:21 Server kernel: mdcmd (64): spinup 1
  Mar 18 21:59:21 Server kernel: mdcmd (65): spinup 2
  Mar 18 21:59:21 Server kernel: mdcmd (66): spinup 4
  Mar 18 21:59:21 Ser
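  For anyone following along with the same "unmountable" symptom, the rough sequence I ended up using looked like this (a sketch only - /dev/md2 is just an example slot, use your actual disk number; running against the md device keeps parity in sync):

  ```shell
  # Array must be started in maintenance mode first.
  # Dry run: -n reports problems without modifying anything.
  xfs_repair -n /dev/md2

  # If the dry run looks sane, run the actual repair verbosely:
  xfs_repair -v /dev/md2

  # Only if it refuses because of a dirty log and suggests -L:
  # -L zeroes the log and can lose the most recent transactions,
  # so understand that before using it.
  # xfs_repair -vL /dev/md2
  ```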
  8. Oh well, you are probably right - it did say so already; I just didn't realize it (it was probably already late yesterday). It should complete within the next 2 hours. Should I cancel it now, or just wait for it to finish (without any result)?
  9. Hi there, I am still pretty new to unraid (using 6.3.8), but I've got everything pretty much set up and running. It's a fairly new server build with the following hardware:
  AMD Ryzen 3 2200G
  MSI B450-A Pro Max
  16GB HyperX Fury DDR4-2666
  Intel EXPI9301CTBLK PCIe x1 LAN adapter
  400W be quiet! System Power 9 CM
  Adaptec SAS HBA (6805H)
  + a bunch of drives (39TB in total, a mix of 3TB/4TB/6TB disks plus an 8TB parity disk, all 8 of them connected to the Adaptec HBA)
  + an SSD cache
  + an additional M.2 SSD which is not included in the array