Homerr

Members
  • Content Count: 63

Community Reputation

2 Neutral

About Homerr

  • Rank: Newbie



  1. I'm having similar issues to jasonmav and scud133b above with CRASHPLAN_SRV_MAX_MEM; see the image below of the pop-ups (and the memory-variable sketch after this list). I tried editing the docker settings for memory to 2048M and 4096M to no avail (I have 16GB of RAM, FWIW). I only get the red X on the webUI header now (the second image is butted up under the first image, with the 3 memory warning boxes). In doing the above I also broke the memory setting, and I hope one of you can post a screenshot/fix. The last time I edited the docker memory settings I selected 'Remove', thinking it was to remove the value, not the actual tool. I see how to
  2. Thanks so much johnnie for noting this. I think this was the real culprit all along. In 25 years of computer building I've never had a standard SATA cable go bad. Everything is back in the array, a parity check showed no errors, and it's been running for a week now.
  3. Thanks! I just finished consolidating some backup files to free up a 10TB drive and swapped that in as a replacement for Disk 8. It's rebuilding now. I'll do as you suggested and check it against the files. And then that drive will replace Disk 13.
  4. Both disks have lost+found folders. Disk 8 just has 2 folders with files named like 132, 133, 134, etc., with no file extensions. The files are large, multi-GB - probably movies...? Disk 13 has more recognizable folders and files. Dumb question: how do I recover each of them from the lost+found trees? (See the lost+found sketch after this list.)
  5. I started the array and the disks now show up with Used and Free space (instead of Unmountable), but they are still emulated.
  6. The outputs are long, so I put them in the attached text files: Disk 8.txt and Disk 13.txt
  7. I let it finish a check of the array while the array was started; it ran all day yesterday and finished without errors. I'm attaching the diagnostics file from after that completed. Restarted in Maintenance mode, and here are the xfs_repair outputs. I reran with no modifier, but did not yet run the -L option.
     Disk 8:
     Phase 1 - find and verify superblock...
     sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 128
     resetting superblock root inode pointer to 128
     sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calcul
  8. Been sick this week, just getting back to this. I bought new SATA cables for all 4 drives running off the mobo, including the 2 parity drives. I started the array in Maintenance mode and ran xfs_repair with no modifiers. It said to start the array and then rerun, and to use -L if that didn't work (see the xfs_repair sketch after this list). Here is the diagnostics file with the array back up. I have not yet gone back into Maintenance mode and rerun anything. unraid-diagnostics-20191214-2113.zip
  9. oops! unraid-diagnostics-20191208-1555.zip
  10. Diagnostics attached. unraid-syslog-20191208-0001.zip
  11. Disk 8:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - agno = 8
              - agno = 9
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
  12. I got the new LSI card from Art of Server and installed it. I started in Maintenance mode, clicked on each of disks 8 and 13, and ran xfs_repair without -n. It then said it needed to mount the drives to get information. I stopped Maintenance mode and started the array in normal mode; I don't know if this is really what it was asking for. Stopped the array again, went into Maintenance mode, and re-ran xfs_repair. It didn't seem to be definitive about what happened, just 'done'. Stopped Maintenance mode and started the array as normal. Disks 8 and 13 still having issu
  13. I've figured out which backplane each drive is on. Disks 13 and 8 were on the same backplane and the same breakout cable. I replaced the breakout cable with a different one and also moved the drives to my one unused slot, to no avail. Once drives are marked 'Unmountable: No file system', do they stay that way until a reformat and/or rebuild? Or should they show up 'healthy' if there are no other issues?
  14. Ok, thanks for the direction. I had previously ordered an LSI 9201-16i from eBay seller jiawen2018 in the midst of this and tried it, but it was wonky from the start and only running at paltry KB/s speeds. I went back to the SASLP controllers since they had previously worked. But now I just ordered another LSI 9201-16i from Art of Server on eBay, and his pics look slightly different from jiawen2018's cards, so I wonder if the latter was genuine. I did put in new cables in August when I first attempted to address this, so I'm concerned that one of my backplanes is bad. I'm goin
  15. [Solved] TL;DR - A SATA cable went bad on one of the parity drives. I had a previous issue I *just* fixed, and now it's back. I had 3 drives drop out of the array, per the thread below. I have 2 parity drives, so I was able to rebuild two from parity; the third had data loss. What I did: I had a new precleared WD 10TB that I put in as Disk 8 (WD10...48LDZ). I then precleared the replaced former Disk 8 (WD10...W43D) as Disk 14; that's running fine now. Disk 11 was ultimately reformatted and returned to the array after some checks
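
For reference on post 1 above: a minimal sketch of how that memory variable might be set if the container were recreated from the command line, assuming the image honors a CRASHPLAN_SRV_MAX_MEM environment variable as the pop-ups suggest. The image name and remaining options are placeholders for whatever the existing Unraid docker template already uses; in the GUI this is the same as adding or editing a Variable on the template, not the container's hard memory limit.

    # hypothetical re-create with the service's maximum memory raised;
    # only the CRASHPLAN_SRV_MAX_MEM name and the 2048M/4096M values come from the posts above
    docker run -d --name crashplan \
      -e CRASHPLAN_SRV_MAX_MEM=4096M \
      <other-template-options> <crashplan-image>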
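
For reference on post 4 above: a rough sketch of one way to triage lost+found entries, assuming the usual Unraid per-disk mounts at /mnt/diskN; the recovery share name is made up for illustration. Entries are named after their inode numbers, so the file type has to be guessed from the contents.

    # guess what each extension-less file actually is (video, archive, etc.)
    file /mnt/disk8/lost+found/*

    # copy anything identifiable to a scratch share for manual sorting,
    # leaving the originals untouched until they check out
    rsync -av /mnt/disk8/lost+found/ /mnt/user/recovered_disk8/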
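
For reference on posts 7, 8 and 12 above: a sketch of the usual check-then-repair order for an array disk in Maintenance mode, assuming the emulated disk is reachable as /dev/md8 (the exact device name depends on the Unraid version and slot number). -L stays a last resort because it zeroes the journal, which is typically what sends orphaned files into lost+found.

    # dry run: report problems without modifying the filesystem
    xfs_repair -n /dev/md8

    # real repair; if it complains that the log needs replaying, start the
    # array normally once so the disk mounts, stop it, and run this again
    xfs_repair /dev/md8

    # last resort only: zero the log and accept that recent metadata
    # may land in lost+found
    xfs_repair -L /dev/md8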