DingHo's Achievements
  1. I checked, and the files remaining on the cache are not duplicates. The appdata on disk9 is what the mover successfully moved.
  2. Hello, I'm trying to clear my single XFS cache drive before encrypting it. I've disabled the VM and Docker services and set all shares to Yes:Cache. After running the mover a few times, 683MB of files remain, all from the appdata directory. I've enabled mover logging and attached the diagnostics. The log shows repeated errors like:

     Jul 6 14:06:35 Scour move: move_object: /mnt/cache/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/96/emotes No such file or directory

     Thanks for any help. scour-diagnostics-20220706-1408.zip
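A common cause of mover "No such file or directory" errors under appdata, especially in icon-theme folders like the path above, is dangling symlinks, which the mover cannot move. A quick way to check, assuming the share lives at the standard /mnt/cache/appdata path (a sketch, not official Unraid tooling):

```shell
# List broken (dangling) symlinks left on the cache; GNU find's -xtype l
# matches symlinks whose target does not exist.
find /mnt/cache/appdata -xtype l

# If the leftovers are only dangling links, they can be deleted so the
# mover can finish. Review the printed list before removing anything.
find /mnt/cache/appdata -xtype l -delete
```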
  3. Does anyone backup their luksHeader to a bin file in case of corruption? Is there any scenario where having it would save you from data loss?
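Backing up the LUKS header is cheap insurance: the header holds the key slots, and if it is corrupted the data is unrecoverable even with the correct passphrase. A minimal sketch using cryptsetup's built-in backup/restore commands; /dev/sdX1 and the backup path are placeholders for your own device and location:

```shell
# Back up the LUKS header of an encrypted partition to a file.
cryptsetup luksHeaderBackup /dev/sdX1 \
    --header-backup-file /boot/config/luks-header-sdX1.img

# Sanity-check the backup against the on-disk header: the leading bytes
# should match (cmp -n limits how many bytes are compared).
cmp -n 4096 /boot/config/luks-header-sdX1.img /dev/sdX1 && echo "backup matches"

# Restore later, if the header is ever damaged:
# cryptsetup luksHeaderRestore /dev/sdX1 \
#     --header-backup-file /boot/config/luks-header-sdX1.img
```

Keep the backup somewhere off the encrypted disk, and note that restoring an old header re-enables any passphrases that were valid when the backup was taken.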
  4. @Shonky Thanks Shonky, best of luck.
  5. @Shonky Thanks for the update. I too attempted rebuilding the docker image, with no effect. I think I was able to chase down a cause for my problem, though I'm not sure it applies to anyone but my particular case, and probably not to unRAID in general. When Plex was running its scheduled tasks, it was getting 'stuck' on some music files in one of my libraries. While stuck it would read like crazy from the cache drive, even though the songs are on the array. I could reproduce the issue several times by triggering the scheduled task and then watching it get stuck. My temporary solution was to remove that particular music library; I'll have to investigate further to see which file(s) in particular were causing the issue. Just thought I'd update in case this helps anyone else.
  6. @JorgeB and @itimpi, thank you both. I'm currently copying data off the bad drive. Can anyone confirm that I can shuffle the drive order when I make a new config, before rebuilding parity?
  7. Thanks @JorgeB Here's my plan moving forward, please help me do a sanity check:
     1) Stop Array. New Config. Add the Unassigned Device (previous parity drive) as Disk 11 (encrypted).
     2) Copy Disk 2 to Disk 11.
     3) Stop Array. Tools, New Config. Remove Disk 2. (Can I put the other disks in any order at this point?)
     4) Order new parity drive.
     5) Finish encrypting other disks while I wait.
     6) Add new drive and parity sync.
  8. Hello, I'm in the process of encrypting my array, following SpaceInvader One's guide, so I'm currently operating without parity to speed things up. I received the popup notification "Offline uncorrectable is 1" for Disk 2, which I successfully encrypted and copied data to/from without errors yesterday. Today, while copying data with unBALANCE between two other disks I received the above error. SMART shows 1 Current Pending Sector. I then ran an extended SMART test (attached) and nothing appears to have changed. On the Main WebGUI page, Disk 2 still has a green ball and shows no errors. I realize the drive is getting old and I plan to replace it soon, however I just want to get through this encryption process successfully. Should I continue my encryption process of the other disks? If so, is it safe to do a parity sync after completion with this error? Appreciate any guidance from someone more knowledgeable than me. Thank you. scour-smart-20210730-1454.zip
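For anyone watching the same counters, the two attributes in question can be checked directly with smartctl (from smartmontools; /dev/sdX is a placeholder for the affected disk). A pending sector is often either reallocated or cleared the next time that sector is written:

```shell
# Show just the two attributes relevant here: 197 (Current_Pending_Sector)
# and 198 (Offline_Uncorrectable).
smartctl -A /dev/sdX | grep -E 'Current_Pending_Sector|Offline_Uncorrectable'
```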
  9. You may need to resort to a VM to get that to work. Best of luck.
  10. Hi @rcrh, it is in fact only a music server. If you search for 'photos' in community applications you'll find plenty of photo servers. Good luck!
  11. Curious if anyone has made any progress on this issue. I'm still encountering it on a fairly regular basis, even after updating to 6.9.2.
  12. @jonp With your permission, I'll take a crack at generating a transcript with Google Speech-to-Text. I have a 96 kb/s, 44.1 kHz MP3 of the podcast. If you're willing to provide a FLAC, it would improve the accuracy.
  13. Another thing I noticed during my last incident: when I ran 'docker stats', I could see that the netdata container was marked 'unhealthy'. I never had netdata running previously when this happened; I only turned it on recently to try to figure this issue out, so I don't think this specific container is the cause. Also, from the 5 diagnostics top files I have accumulated while this occurs:

      MiB Mem : 7667.6 total, 117.7 free, 6593.1 used, 956.7 buff/cache
      MiB Mem : 7667.6 total, 131.4 free, 6260.4 used, 1275.8 buff/cache
      MiB Mem : 7667.6 total, 121.9 free, 6270.2 used, 1275.5 buff/cache
      MiB Mem : 7667.6 total, 121.2 free, 6304.1 used, 1242.3 buff/cache
      MiB Mem : 7667.6 total, 117.0 free, 6156.9 used, 1393.7 buff/cache

      So, similar to @Shonky's findings on the RAM side: it doesn't seem to point to that. Curious how to figure out what is causing the loop2 reads, or whether there is a workaround to restart a Docker container if it's marked as unhealthy?
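On the "restart a container if it's marked unhealthy" question: docker ps can filter on health status directly, so a small cron-able one-liner is possible (the filter and format flags are standard Docker CLI; whether auto-restarting is appropriate for your containers is a judgment call):

```shell
# Print the names of containers whose healthcheck reports 'unhealthy',
# then restart each one. xargs -r skips the restart when the list is empty.
docker ps --filter health=unhealthy --format '{{.Names}}' \
  | xargs -r docker restart
```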
  14. I think I'm having the same issue. I posted previously about it (link to thread with multiple diagnostic files below). Here's what I've found:
      - iowait causes all 4 CPUs to peg at 100%, and the system becomes mostly unresponsive
      - iotop -a shows a large amount of accumulating READs from the cache disk (at >300MB/s), specifically from loop2
      - restarting a docker container via the command line fixes the problem (for example, docker restart plex, or docker restart netdata)
      I can not figure out a pattern to when this happens. Mover or TRIM is not running. No one is watching a Plex movie. I'm on 6.8.3, and all drives are formatted XFS.
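To pin down what loop2 actually is (on Unraid it is typically the docker.img loopback), the kernel exposes the backing file for every loop device. Two read-only checks, no changes made:

```shell
# Ask losetup which file backs /dev/loop2; in its --list output,
# column 6 is BACK-FILE.
losetup --list | awk '$1 == "/dev/loop2" {print $6}'

# Equivalent, via sysfs:
cat /sys/block/loop2/loop/backing_file
```

If loop2 maps to docker.img, the heavy reads are coming from inside a container, which would match the docker-restart workaround above.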