About comfox


  1. Same issue here... been using this image for years and today it won't come online. **EDIT** OK, got past this part. Used a symbolic link first, then edited the … getting this error:

     SyntaxError: invalid syntax
     Traceback (most recent call last):
       File "/app/sickchill/", line 14, in <module>
         import sickchill.start
       File "/app/sickchill/sickchill/", line 1, in <module>
         from .show.indexers import indexer, ShowIndexer
       File "/app/sickchill/sickchill/show/indexers/", line 1, in <module>
         from .handler import Sh
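A hedged sketch of the symlink part of the workaround described above. The paths here are examples only, not taken from the post; the actual container layout would need to be checked:

```shell
# Hypothetical paths for illustration -- adjust to the real container layout.
# Point the path the new image expects at the existing config directory.
ln -s /config/sickchill /app/sickchill/data

# Verify the link resolves before restarting the container.
ls -l /app/sickchill/data
```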
  2. Thanks, maybe I will try it this weekend and see. I am using XFS for the cache drive; there is no pool. I have tried having the downloads on the cache drive and on the user shares, and the experience is the same on both.
  3. I have been using this Docker for many years now, and I have always had one nagging problem with it: whenever a download is active, my raid slows to a crawl. If I am watching a video from the raid while SAB is downloading, the video will freeze completely for a few seconds. Anyone else experience this? Any way to fix it?
  4. I am trying to migrate from the needo/plex Docker to this one, but I can't for the life of me get this Docker to recognize the data in the appdata location from the old Docker. Is there a way for me to migrate that data easily?
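Migrations like the one described above usually come down to copying the old container's appdata into the path the new container maps as /config. A hedged sketch, assuming the old template used /mnt/user/appdata/needo-plex and the new one uses /mnt/user/appdata/plex (both paths are assumptions; confirm them in each Docker template):

```shell
# Paths are examples only -- check each template's /config host mapping.
docker stop plex                                  # stop the new container first
cp -a /mnt/user/appdata/needo-plex/. /mnt/user/appdata/plex/
chown -R nobody:users /mnt/user/appdata/plex      # unRAID's default file owner
docker start plex
```

Note that the directory layout inside appdata can differ between images (for example, where the "Plex Media Server" folder sits), so it is worth comparing the two trees before starting the new container.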
  5. Thanks, I will give some of these a try. I have cleared the credential manager so that isn't the issue. And since this issue persists on every machine I can only think that it is an inherent bug in Windows or an unRAID config issue.
  6. From every computer in my home (approx. 8) I am prompted for a username and password whenever I try to connect to the shares. All computers are Windows 10 (various builds, some on the newest), and the machines are connected both over WiFi and physically to the server. Typing in the username \ without a password works, but this means my script to map the drives doesn't work. How can I fix this? I have tried running the "new permissions" script from unRAID, but it never completes.
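For reference, a drive-mapping script like the one mentioned above typically uses `net use` with explicit credentials, which is where the empty-password prompt breaks it. A hedged sketch (Windows cmd, shown here for illustration; server name, share, and user are placeholders):

```shell
:: Windows cmd batch fragment -- TOWER, media, and myuser are placeholders.
:: Map the share with explicit credentials so no interactive prompt appears.
net use Z: \\TOWER\media /user:TOWER\myuser mypassword /persistent:yes
```

If unRAID's SMB user matches the Windows username and password, the credentials can usually be dropped from the command entirely.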
  7. Thanks @gacpac. I am in no hurry so I will try your route and see how it goes!
  8. I have just completed adding 4 8TB drives to my array and would like to remove 3 of the old 4TB drives from the array without losing the data that is on said 4TB drives. Is there a way to do this? I have read the page, but it doesn't sound like it will copy all of the data off of one drive to another drive and then allow me to remove that drive. The 4 new 8TB drives replaced 4 other 4TB drives, which I replaced as upgrades. I simply want to reduce the overall number of drives now that I have all this free space in the array from the new drives.
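The usual approach to the question above is to empty each drive by copying its contents to another data disk, then remove the emptied drive via unRAID's shrink-array procedure. A hedged sketch, assuming disk3 is being emptied onto disk5 (disk numbers are examples):

```shell
# Copy everything from disk3 to disk5, preserving attributes; run from the console.
rsync -avX /mnt/disk3/ /mnt/disk5/

# Verify the copy before deleting anything; the emptied disk can then be removed
# following the unRAID shrink-array steps (which involve a New Config and a
# parity rebuild).
```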
  9. Thanks for the heads up @Squid. I didn't realize there was a new version out. I do not like the new update feature of unRAID. It used to be easy to tell when there was a new version; now I never get notified and can't see it. I will see if the issues continue with the new version.
  10. I have mcelog installed from Nerd Pack and have been running it for a while. I can try memtest, but this is a pretty heavy server running a Win 10 VM that I game on, as well as many Dockers, including Plex, which transcodes. I would think that if I had a stability or memory issue I would have run into it by now, no?
  11. FCP is telling me that I am having a hardware error due to "Machine Check Events detected on your server". I downloaded the diagnostics and looked at the syslog. I see the MCEs, but I can't make heads or tails of what they mean. Any help?

      Aug 3 10:21:24 Tower kernel: mce: CPU supports 9 MCE banks
      Aug 3 10:21:24 Tower kernel: mce: CPU supports 9 MCE banks
      Aug 3 10:21:24 Tower kernel: mce: [Hardware Error]: Machine check events logged
      Aug 3 10:21:24 Tower kernel: mce: [Hardware Error]: CPU 0: Machine Check: 0 Bank 4: be00000000800400
      Aug 3 10:21:24 Tower kernel: mce: [Har
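Raw bank values like the ones above can sometimes be decoded by mcelog itself. A hedged sketch (the log path is unRAID's usual syslog location; adjust if the diagnostics zip is being used instead):

```shell
# Pull the MCE lines out of the syslog for a quick look.
grep -i "Machine Check" /var/log/syslog

# Let mcelog try to decode the kernel-format MCE records into readable text.
mcelog --ascii < /var/log/syslog
```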
  12. Thanks for the quick response. I did that, and the output said I should try -L. I ran xfs_repair -L /dev/md5 and it completed successfully. I then stopped and started the array, and the drive was picked back up. Does this mean the drive is dead, or should it be good to go for a while? Should I run a parity check now as well?
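For reference, the sequence described in this thread corresponds roughly to the following (run with the array started in Maintenance mode; /dev/md5 is this poster's disk 5 device):

```shell
# Check only -- report problems without changing anything.
xfs_repair -n /dev/md5

# Normal repair; refuses and suggests -L if the metadata log is dirty.
xfs_repair /dev/md5

# Last resort: zero the metadata log, which can discard the most recent
# metadata changes.
xfs_repair -L /dev/md5
```

A repair completing successfully does not by itself mean the drive is failing; a dirty log often just reflects an unclean shutdown. Checking the drive's SMART report and running a parity check afterwards are both reasonable follow-ups.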