DingHo

Everything posted by DingHo

  1. I recently upgraded to 6.12.6 from 6.9.2. I'm now getting extremely slow write speeds to my encrypted XFS drives; both cache and array are down to ~10MB/s. I can see CPU usage going way up as well. I've attached a diagnostics file. Thank you for any help. scour-diagnostics-20240123-1539.zip
  2. Parity finished with zero errors. Thanks.
  3. @apandey Thank you. I think you are correct regarding the SATA/Power connections. I swapped another array disk to the problem position and the CRC and read errors began occurring on that drive and not the previous one. I then double checked all connectors on the back of the hot swap drive cage, then tried again and it seems to have fixed it. I'll run a non-correcting parity test now to confirm. Thanks again.
  4. My Unraid server has just completed a 5+ month journey across the Pacific Ocean on a literal slow boat from China. All disks were removed from the server and packed in Pelican cases. I put it all back together and it booted up, and the array starts just fine; however, I've been getting disk read errors and UDMA CRC errors on Disk 4. It's been a long time since the last parity check. I think I should just remove/replace/rebuild Disk 4, but I wanted some advice before starting that process. I've attached the diagnostic files. Thank you. scour-diagnostics-20230226-1356.zip
  5. I checked, and the files remaining on the cache are not duplicates. The appdata on disk9 is what the mover successfully moved.
  6. Hello, I'm trying to clear my single XFS cache drive before encrypting it. I've disabled the VM and Docker services and set all shares to Yes:Cache. After running the mover a few times, there are still 683MB of files remaining, all from the appdata directory. I've enabled mover logging and attached the diagnostics. I've noticed repeated errors in the log like: Jul 6 14:06:35 Scour move: move_object: /mnt/cache/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/96/emotes No such file or directory Thanks for any help. scour-diagnostics-20220706-1408.zip
  7. @Shonky Thanks Shonky, best of luck.
  8. @Shonky Thanks for the update. I too attempted rebuilding the docker image, with no effect. I think I was able to chase down a cause for my problem, though I'm not sure it applies to anyone but my particular case, and probably not to unRAID in general. When Plex was running its scheduled tasks, it was getting 'stuck' on some music files in one of my libraries. While stuck, it would read like crazy from the cache drive, even though the songs are on the array. I could reproduce the issue several times by setting the scheduled task and then watching it get stuck. My temporary solution was to remove that particular music library; I'll have to investigate further to see which file(s) in particular were causing the issue. Just thought I'd update in case this helps anyone else.
  9. @JorgeB and @itimpi, thank you both. I'm currently copying data off the bad drive. Can anyone confirm that I can shuffle around the drive order when I make a new config before rebuilding parity?
  10. Thanks @JorgeB. Here's my plan moving forward, please help me do a sanity check: 1) Stop array, Tools > New Config, and add the unassigned device (previous parity drive) as Disk 11 (encrypted). 2) Copy Disk 2 to Disk 11. 3) Stop array, Tools > New Config, and remove Disk 2. (Can I put the other disks in any order at this point?) 4) Order a new parity drive. 5) Finish encrypting the other disks while I wait. 6) Add the new drive and run a parity sync.
  11. Hello, I'm in the process of encrypting my array, following SpaceInvader One's guide, so I'm currently operating without parity to speed things up. I received the popup notification "Offline uncorrectable is 1" for Disk 2, which I successfully encrypted and copied data to/from without errors yesterday. Today, while copying data with unBALANCE between two other disks, I received the above error. SMART shows 1 Current Pending Sector. I then ran an extended SMART test (attached) and nothing appears to have changed. On the Main WebGUI page, Disk 2 still has a green ball and shows no errors. I realize the drive is getting old and I plan to replace it soon; however, I just want to get through this encryption process successfully. Should I continue my encryption process on the other disks? If so, is it safe to do a parity sync after completion with this error? Appreciate any guidance from someone more knowledgeable than me. Thank you. scour-smart-20210730-1454.zip
  12. You may need to resort to a VM to get that to work. Best of luck.
  13. Hi @rcrh, it is in fact only a music server. If you search for 'photos' in community applications you'll find plenty of photo servers. Good luck!
  14. Curious if anyone has made any progress on this issue. I'm still encountering it on a fairly regular basis, even after updating to 6.9.2.
  15. @jonp With your permission, I'll take a crack at generating a transcript with Google Speech-to-Text. I have a 96kb/s 44.1kHz MP3 of the podcast. If you're willing to provide a FLAC, it would improve the accuracy.
  16. Another thing I noticed during my last incident: when I ran 'docker stats', I could see that the netdata container was marked 'unhealthy'. I never had netdata running previously when this happened; I just turned it on recently to try and figure this issue out, so I don't think this specific docker is the cause. Also, here are the memory summaries from the 5 diagnostics 'top' outputs I have accumulated while this occurs:
      MiB Mem : 7667.6 total, 117.7 free, 6593.1 used, 956.7 buff/cache
      MiB Mem : 7667.6 total, 131.4 free, 6260.4 used, 1275.8 buff/cache
      MiB Mem : 7667.6 total, 121.9 free, 6270.2 used, 1275.5 buff/cache
      MiB Mem : 7667.6 total, 121.2 free, 6304.1 used, 1242.3 buff/cache
      MiB Mem : 7667.6 total, 117.0 free, 6156.9 used, 1393.7 buff/cache
      So, similar to @Shonky's findings, RAM doesn't seem to point to the cause. Curious how to figure out what is causing the loop2 reads, or whether there is a workaround to restart a docker if it's marked as unhealthy?
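The restart-on-unhealthy workaround asked about above can be sketched in shell. This assumes the containers define a Docker HEALTHCHECK (so `docker ps` can filter on health status); the helper name and the idea of scheduling it via cron or the User Scripts plugin are assumptions, not a built-in Unraid feature:

```shell
#!/bin/sh
# Sketch: restart any Docker container whose health check currently reports
# "unhealthy". Assumes the containers define a HEALTHCHECK; how you schedule
# this (cron, User Scripts plugin) is up to you.

restart_cmds() {
    # Read container names, one per line, and emit a restart command for each.
    while read -r name; do
        if [ -n "$name" ]; then
            echo "docker restart $name"
        fi
    done
}

# List containers currently marked unhealthy and restart each of them.
if command -v docker >/dev/null 2>&1; then
    docker ps --filter "health=unhealthy" --format '{{.Names}}' | restart_cmds | sh
fi
```

Dropping the final `| sh` turns this into a dry run that only prints the commands it would issue.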
  17. I think I'm having the same issue. I posted previously about it (link to thread with multiple diagnostic files below). Here's what I've found:
      • iowait causes all 4 CPUs to peg at 100%, and the system becomes mostly unresponsive
      • iotop -a shows a large amount of accumulating READS from the cache disk (at >300MB/s), specifically on loop2
      • restarting a docker container via the command line fixes the problem (for example, 'docker restart plex' or 'docker restart netdata')
      I cannot figure out a pattern to when this happens. The mover and TRIM are not running, and no one is watching a Plex movie. I'm on 6.8.3, with all drives formatted XFS.
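For anyone chasing the same symptom, one way to confirm what loop2 actually is would be to ask losetup for the device's backing file (on stock Unraid it is typically the docker.img vDisk, which is why container activity shows up as loop2 reads). A minimal sketch; the helper name is mine:

```shell
#!/bin/sh
# Sketch: print the file backing a loop device, e.g. /dev/loop2 from the
# post above. On stock Unraid this is usually the docker.img vDisk, so
# heavy reads on loop2 point at activity inside a container.

loop_backing_file() {
    # Print the backing file of the given loop device; print nothing if
    # the device node doesn't exist.
    dev="$1"
    if [ -e "$dev" ]; then
        losetup --list --noheadings --output BACK-FILE "$dev"
    fi
}

# The device may be absent or unused on other systems, so tolerate failure.
loop_backing_file /dev/loop2 || true
```

Pairing this with 'iotop -ao' (accumulated I/O) then narrows down which process is generating the reads.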
  18. Sorry, I misunderstood. If I restart only the plex docker, it goes back to normal operation. I haven't tried disabling/re-enabling the docker service and VM service.
  19. @JorgeB thanks for the reply. Is there any way to check besides disabling docker/VMs? As this doesn't occur predictably, and sometimes not for many days, I'd be without many services unless I built another system to take over network duties. Thanks again.
  20. Occurred again today. I noticed the cache drive was being read at a very high rate while CPUs were pegged to 100% again. Attached new diags. Any help appreciated. scour-diagnostics-20210503-0959.zip
  21. For unknown reasons, my Unraid server will suddenly have all 4 CPUs pegged at 100%. The result is that the whole house loses internet access, as I've got Pi-hole DNS and UniFi Controller dockers running on this box. It happens about once a week. I first thought it was the Pi-hole docker, so I pinned it to a single CPU. Then I thought it was the Plex docker, so I pinned that to 2 CPUs. Neither action has solved the problem. Running top doesn't seem to indicate what is maxing out the CPUs; no process is using over ~20%. I can fix it without rebooting by restarting the Plex docker via the command line. I've been having this problem for almost a year. I've looked at the diagnostics but can't figure this out. Could anyone please help? scour-diagnostics-20210430-0910.zip
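Since restarting the Plex docker from the command line reliably clears the stall described above, a crude self-healing watchdog could be sketched as below. The 8.0 load threshold (4 cores fully pegged, plus iowait) and the container name "plex" are assumptions to tune for your own box, and scheduling it (e.g. via cron) is left out:

```shell
#!/bin/sh
# Sketch: if the 1-minute load average crosses a threshold, restart the
# suspect container. The threshold (8.0 for a 4-CPU box) and the container
# name ("plex") are assumptions; tune both before using.

load_exceeds() {
    # Succeed when the load average ($1) is at or above the limit ($2).
    awk -v l="$1" -v t="$2" 'BEGIN { exit !(l >= t) }'
}

# First field of /proc/loadavg is the 1-minute load average.
current=$(cut -d' ' -f1 /proc/loadavg 2>/dev/null || echo 0)
if load_exceeds "$current" 8.0 && command -v docker >/dev/null 2>&1; then
    docker restart plex
fi
```

A watchdog like this only papers over the symptom; it doesn't explain the underlying loop2 reads, so keep collecting diagnostics while it runs.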
  22. Does that mean the existing container will stay on the final release of forked-daapd and cease to be updated, or will it just keep the existing name and still get updated from the new project? Thanks, and I appreciate your efforts.
  23. Will this project be switching over to the new name/repository? https://github.com/owntone/owntone-server