Everything posted by comfox

  1. Same issue here...been using this image for years and today it won't come online. **EDIT** OK, got past this part. Used a symbolic link first, then edited the getting this error:

     SyntaxError: invalid syntax
     Traceback (most recent call last):
       File "/app/sickchill/", line 14, in <module>
         import sickchill.start
       File "/app/sickchill/sickchill/", line 1, in <module>
         from .show.indexers import indexer, ShowIndexer
       File "/app/sickchill/sickchill/show/indexers/", line 1, in <module>
         from .handler import Sh
  2. Thanks, maybe I will try it this weekend and see. I am using XFS for the cache drive, there is no pool. I have tried having the downloads on the cache drive and the user shares and the experience is the same on both.
  3. I am using this Docker (for many years now) and I have always had one nagging experience with it. Whenever a download is active, my raid becomes a crawling snail. If I am watching a video from the raid and SAB is downloading, the video will freeze completely for a few seconds. Anyone else experience this? Any way to fix it?
  4. I am trying to migrate from the needo/plex Docker to this one but I can't for the life of me get this Docker to recognize the data in the appdata location from the old docker. Is there a way for me to migrate that data easily?
  5. Thanks, I will give some of these a try. I have cleared the credential manager so that isn't the issue. And since this issue persists on every machine I can only think that it is an inherent bug in Windows or an unRAID config issue.
  6. From every computer in my home (approx. 8) I am prompted for a username and password whenever I try to connect to the shares. All computers are Windows 10 (various builds, but some on the newest) and machines are both WiFi and physically connected to the server. Typing in the username \ without a password works, but this means my script to map the drives doesn't work. How can I fix this? I have tried running the "new permissions" script from unRAID but it never completes.
  7. Thanks @gacpac. I am in no hurry so I will try your route and see how it goes!
  8. I have just completed adding 4 8TB drives to my array and would like to remove 3 of the old 4TB drives from the array without losing the data that is on said 4TB drives. Is there a way to do this? I have read the page but it doesn't sound like it will copy all of the data off of 1 drive to another drive and then allow me to remove that drive. The 4 new 8TB drives did replace 4 other 4TB drives, which I replaced as upgrades. I simply want to reduce the overall number of drives now that I have all this free space in the array that the new drives
  9. Thanks for the heads up @Squid. I didn't realize there was a new version out. I do not like the new update feature of unRAID. It used to be easy to tell if there was a new version, now I find I never get notified or can see it. I will see if the issues continue with the new version.
  10. I have mcelog installed from Nerd Pack. I have been running it for a while. I can try the memtest, but this is a pretty heavy server running a Win 10 VM that I game with, as well as many dockers including Plex, which transcodes. I would think that if I had a stability issue or a memory issue I would have run into it by now, no?
  11. FCP is telling me that I am having a hardware error due to "Machine Check Events detected on your server". I downloaded the diag and looked at the syslog. I see the MCEs but I can't make heads or tails of what they mean. Any help?

      Aug 3 10:21:24 Tower kernel: mce: CPU supports 9 MCE banks
      Aug 3 10:21:24 Tower kernel: mce: CPU supports 9 MCE banks
      Aug 3 10:21:24 Tower kernel: mce: [Hardware Error]: Machine check events logged
      Aug 3 10:21:24 Tower kernel: mce: [Hardware Error]: CPU 0: Machine Check: 0 Bank 4: be00000000800400
      Aug 3 10:21:24 Tower kernel: mce: [Har
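For anyone hitting the same FCP warning, the raw events can be pulled out and decoded from the command line. This is a sketch only: the log paths are assumptions (they vary by distro/unRAID version), and mcelog must be installed, e.g. from Nerd Pack as mentioned in the posts above.

```shell
# Pull the raw machine-check lines out of the syslog.
grep -i 'mce:' /var/log/syslog

# If the mcelog daemon is running, its decoded output typically lands
# in its own log file (path is an assumption; check your setup).
cat /var/log/mcelog
```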
  12. Thanks for the quick response. I did that and the output said I should try -L. I did xfs_repair -L /dev/md5 and it completed successfully. I then stopped and started the array and the drive was picked back up. Does this mean this drive is dead, or should it be good to go for a while? Should I run a parity check now as well?
  13. Hi there, I just rebooted the array and it came up saying Disk 5 is Unmountable: No Filesystem. I am currently running xfs_repair -v. I have attached my tower-diagnostics
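The xfs_repair sequence described in the two posts above can be sketched as follows. The device name /dev/md5 comes from the posts; on unRAID this should only be run with the array started in maintenance mode, and -L is a last resort.

```shell
# Dry run first: -n reports problems without modifying the filesystem.
xfs_repair -n /dev/md5

# Normal repair attempt (replays the metadata log if possible).
xfs_repair -v /dev/md5

# Only if xfs_repair explicitly asks for it: -L zeroes the metadata
# log, which can discard the most recent metadata updates.
xfs_repair -L /dev/md5
```

An unmountable filesystem that repairs cleanly is not by itself a sign of a dying drive; checking SMART data and running a parity check afterwards, as the poster asks, is a reasonable follow-up.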
  14. Hey All, I have been using this docker for a while now and have had this issue since starting, but figured I would pose the question here. Whenever I am downloading something via this docker my Windows 10 VM will lag. The entire VM just freezes briefly for say 7 seconds, then comes back, but then will continue to lag for 7 seconds at a time until the download is complete. I use CPU pinning and have the Docker and VM on different cores. No other dockers do this and I have some pretty intense ones running.
  15. I am, I found it, I didn't like it. I went back to figuring out Dolphin and found the problem. Somehow the port got reset back to default, which had a conflict with another container. I changed the port on Dolphin and all is well now.
  16. Thanks, I will do the same. Can you suggest a particular docker build?
  17. Was this ever resolved? I am having the same issue now.
  18. I will likely go back to it, just wish I knew what caused this corruption in the first place. Losing a week to rebuild is a bummer.
  19. Thanks...I took a read through the post. Seems amazing, though looks a bit too complicated for me to setup and maintain. I don't know where to start.
  20. Thanks for the info...will this type of backup cover me in this type of scenario where I had corruption? Would I just rebuild the pool from scratch and then put the snapshot back on to the new pool?
  21. Consider this topic closed. Here is a rundown for anyone that stumbles across this in the future. NOTE: btrfs is still in its infancy and corruption can occur. Recovery tools are not easy to work with and don't do a very good job recovering. Use btrfs at your own risk and back up anything of value daily, if not hourly. For some reason on the weekend my Cache Pool (btrfs) decided to head to the crapper. I do not know why it did, there were no unexpected shutdowns and I didn't really look hard into the why. I just assume that I am a heavy user and corruptions happene
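The snapshot-based backup discussed in posts 19-21 could look roughly like this. All paths here are illustrative assumptions, not from the posts; /mnt/cache is the btrfs pool and the backup target is an ordinary file on another filesystem.

```shell
# Take a read-only snapshot of the pool (read-only is required for
# btrfs send).
btrfs subvolume snapshot -r /mnt/cache /mnt/cache/.snap-daily

# Serialize the snapshot to a file on a different filesystem.
btrfs send /mnt/cache/.snap-daily > /mnt/user/backups/cache-daily.btrfs

# After rebuilding a corrupted pool from scratch, restore by replaying
# the stream onto the fresh pool.
btrfs receive /mnt/cache < /mnt/user/backups/cache-daily.btrfs
```

This matches the scenario asked about in post 20: rebuild the pool from scratch, then put the snapshot back onto the new pool.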
  22. Nevermind, I figured it out by reading your edit. I changed the slots from 3 to 1 and then the dropdown came back.