Everything posted by dgwharrison

  1. Hi @binhex, thanks again for all the amazing containers. My Sonarr container seems to have died; I had a look at the logs and I think it may have failed to upgrade a database or something. What's the safest way of fixing this problem? I'd just rather avoid a complete rebuild. Logs: https://pastebin.com/RG6xpQs7
  2. Found the problem. It was because Radarr, Sonarr and Lidarr were using the DelugeVPN proxy, which seems to be broken, but at least it's now a specific issue.
  3. Have just updated to 6.9.1, still same problem.
  4. Hi guys, since about the time I updated to 6.9 rc2 my Docker containers don't seem to be able to talk to each other. For example, testing the downloaders in Sonarr or Radarr against both SABnzbd and DelugeVPN fails, i.e. they time out. They're all configured to use the server's static IP. Using the URL Sonarr or Radarr says is failing from a browser works no problem, and I'm able to use the web UI of SABnzbd/Deluge with no issues. All dockers are @binhex releases. What should I check to diagnose this problem?
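      The kind of test I've been doing, roughly (a sketch only: the IP, ports and container names below are examples from my setup, with 8080 for SABnzbd and 8112 for the Deluge web UI being the defaults, and it assumes curl exists inside the container):
        # From the Unraid terminal: confirm the host itself can reach the downloaders on the static IP
        curl -sI http://192.168.1.10:8080      # SABnzbd web UI
        curl -sI http://192.168.1.10:8112      # Deluge web UI
        # From inside the Sonarr container: the same request that works in a browser times out here
        docker exec -it binhex-sonarr curl -m 10 -sI http://192.168.1.10:8080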
  5. Sorry @itimpi just realised I forgot to export. 🤦‍♂️
  6. Hi, just wondering how I reset the File Integrity plugin? I deleted the hash files, however that seems to have produced unexpected behaviour... (my bad).
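      In case it matters for anyone answering: my understanding (which may well be wrong) is that the plugin keeps checksums as extended attributes on the files themselves as well as in the exported hash files on the flash drive, so deleting the exports alone wouldn't be a full reset. This is how I've been inspecting them; the path and attribute name are just examples:
        # Dump whatever extended attributes are attached to a file
        getfattr -d /mnt/disk1/Movies/example.mkv
        # Strip a specific attribute if a clean slate is really wanted
        # (use whichever attribute name getfattr actually reports)
        setfattr -x user.hash /mnt/disk1/Movies/example.mkv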
  7. Hi @binhex, thanks for another docker. I've only just installed this one. It doesn't seem to load correctly when accessed via the Let's Encrypt reverse proxy; not a massive problem, just thought I'd report it.
  8. Hi, thanks for the plugin. Just a minor bug I noticed when running the Find Duplicates command: it says it's checking Disk 8, but I've actually removed that disk from the array as it was faulty. Screenshot:
  9. Hi @binhex, I noticed Plex has a new transcoder that supports NVIDIA transcoding on Linux with zero copy and all sorts of other nice stuff. Before I rush out and buy a supported card, I was just wondering: is it possible to use this new feature and the card when running Plex from a docker? If it were a VM it would be a pretty straightforward matter to just pass through the card; is that possible with a docker? Where does one load the drivers? Edit: Disregard. Found a video from @SpaceInvaderOne that shows how to do it!
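      For anyone who lands here later, the gist as I understand it from that video (so treat the exact values as approximate rather than gospel): with the Nvidia-enabled Unraid build installed, the card is exposed to the container with extra Docker parameters and environment variables rather than being passed through like a VM device:
        # Extra Parameters on the Plex container:
        #   --runtime=nvidia
        # Environment variables:
        #   NVIDIA_VISIBLE_DEVICES=<the card's UUID>
        #   NVIDIA_DRIVER_CAPABILITIES=all
        # The UUID comes from:
        nvidia-smi -L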
  10. Hi @spants, thanks for the Pi-hole docker. I'd like to set this up so I can use it with the Let's Encrypt reverse proxy, however I notice that when I set a custom password for Key 9 (WEBPASSWORD), it doesn't seem to work. The default 'admin' still works, but not what goes in the field. I can't see anywhere in the UI to set the password, so I'm assuming it's in a config file, hence you'd have to ssh into the docker, and even if you changed it there it wouldn't be persistent across docker image updates. Is there something I should check, or is this a known issue?
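      The workaround I know of isn't specific to this template, just stock Pi-hole (so hedging a bit here): set the password from inside the running container, which writes it to setupVars.conf under /etc/pihole, and since that directory appears to be volume-mapped it should survive image updates. The container name below is just whatever yours is called:
        # Set the admin password from inside the running container
        docker exec -it pihole pihole -a -p 'MyNewPassword'
        # Or blank it entirely
        docker exec -it pihole pihole -a -p ''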
  11. Actually, ignore my last post: the Docker mapping of /tmp to /tmp/plex doesn't survive a reboot because the directory no longer exists. So I fixed it by adding mkdir /tmp/plex to the /boot/config/go file (snippet below), and I also changed the Docker mapping from /tmp/plex to just /tmp and configured Plex to use /tmp/plex in the transcoder settings.
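      For reference, this is what ends up in my go file (the chmod is an addition on my part, on the assumption that Plex's non-root user inside the container needs write access there):
        # /boot/config/go
        # Create /tmp/plex for Plex in-RAM transcoding; /tmp lives in RAM, so this has to be recreated every boot
        mkdir /tmp/plex
        chmod 777 /tmp/plex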
  12. For some reason, when I saved the Docker path mapping of /tmp to /tmp/plex it didn't stick: when I checked it again it was back to /tmp → /tmp. So I just edited the Docker mapping again to /tmp → /tmp/plex and rebooted, leaving autostart on for the first run, and hey presto, it was fine. So the solution is definitely: if you want Plex to transcode in RAM, make sure it's doing it in /tmp/plex.
  13. Hi @itimpi, it's never been a problem before. I'll change it to /tmp/plex for now, and later, if it really is a RAM capacity issue, I'll move it to the SSD - just trying to prolong the life of the SSDs if possible. The thing is, though, even after a reboot with Plex not running the plugins are all still broken, and there's almost 25GB of available RAM. Do you or @Squid know how to fix them?
  14. Ok, so how do I fix it? And can I not use /tmp for Plex transcoding in RAM on Unraid?
  15. Mmmm..... No, not manually. But I do have my Plex transcode dir set to /tmp; at the time of the logs & screenshots nothing was playing, though, so binhex-plexpass resource usage was little to nothing. Confirmed, because I just got the same results on the Plugins and Main pages and there is nothing playing at all since I've stopped the docker. I'm not a Linux expert, but my understanding is that /tmp will write to disk (swap) if RAM is exhausted, right? So some paging, but not a complete loss of data?
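      For what it's worth, here's how I've been checking whether /tmp is actually eating RAM at any given moment, and whether there's any swap for it to spill into (standard commands, nothing Unraid-specific):
        # Current size and usage of /tmp, and overall free memory
        df -h /tmp
        free -m
        # Any swap configured at all?
        swapon --show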
  16. Hi @Squid, maybe this is one you might be able to answer. Take a look at the screenshots attached: I get a whole whack of PHP errors on the Plugins page and a couple on the Main page. Diagnostics attached. zeus-diagnostics-20190828-0858.zip
  17. Hi @johnnie.black, I removed that disk and rebuilt the array, but I still get the email. Any other ideas?
  18. Appears to have worked just fine for both the cache and the array disks, content still all there. Thanks @itimpi, I just wanted to check it was safe before I proceeded.
  19. Hi @itimpi, thanks for the detailed answer, this helps my understanding a lot. I just wanted to make sure that data on the cache will be treated in the same way as data on the array disks, i.e. for the shares I have exclusively on the cache, like appdata and domains, will Unraid keep those intact if I preserve all drive assignments?
  20. I've recently replaced a 3TB disk with a 12TB disk in the array, which went fine; it was rebuilt successfully. However, there are other disks that are failing and I'd like to remove them. I've already moved the content off the drives I want to remove, just using rsync and then removing the source. I've read the procedure on the wiki regarding removing drives and using the "New Config" tool. My understanding is that the files on all the disks (aside from the disks being removed) will be fine and I don't need to copy them off (which is good, as I don't have the capacity to do a full backup). The question I have is: what about the cache? I have a lot of stuff exclusively on the cache, for example appdata, domains and system. Is all that going to be fine? I'm using 6.7 and I can see there is an option for "Preserve current assignments", which presumably I want to set to 'all', right? When I rebuild the array do I need to put each disk back where it was? Specifically, I have 9 disks plus parity at the moment and disk 8 is the one I want to remove, so do I leave disk 8 blank in the new config or can I move disk 9 to slot 8? Not that it matters, it's OK to have a gap I guess, I'm just interested in the structure.
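      For completeness, this is roughly how I emptied the disk beforehand (a sketch only, disk numbers are examples; I ran a dry-run pass to verify before deleting anything):
        # Copy everything off the disk being removed onto another array disk, keeping attributes
        rsync -avX --progress /mnt/disk8/ /mnt/disk3/
        # Dry run of the same command: if no files are listed, everything made it across
        rsync -avXn /mnt/disk8/ /mnt/disk3/
        # Only then remove the source
        rm -rf /mnt/disk8/*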
  21. So I tried setting the cache NVMes to not spin down, I've uninstalled and reinstalled the Preclear Disks plugin, and I've checked the file system of each disk in maintenance mode without the -n flag; I'm at a loss for what to do next. I still get the email every day. I figure it probably isn't the disks, because I upgraded to 6.7 from 6.6.7 before I replaced disk 7, and it was after the 6.7 upgrade that the message started coming. What should I do next? I've attached a new diagnostics file. zeus-diagnostics-20190521-0023.zip
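      To be clear about what I mean by checking without the -n flag: whether it's done from the GUI options box or the command line, for XFS it comes down to xfs_repair with and without -n, run with the array started in maintenance mode (device numbers are examples):
        # Read-only check (what the GUI runs by default):
        xfs_repair -n /dev/md1
        # Actual repair, with -n dropped so it can fix what it finds:
        xfs_repair /dev/md1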
  22. Thanks Squid, I'll try disabling spin down on the cache. I'm not worried about the disk that won't mount, it's on the way out and why the array was rebuilding. I'll also spin up in maintenance mode and check the file systems.
  23. Hi @Squid, yes, it wasn't Preclear, or if it was, uninstalling/reinstalling didn't work. I received the message again today. Diagnostics attached. zeus-diagnostics-20190517-2324.zip
  24. I started getting this email when I updated to 6.7. However, the content of my email is: HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device. All of my drives are correctly identified, so I guess it might be what @Squid said about the Preclear Disk plugin. I've removed it and reinstalled it; we'll see if the same email comes tomorrow!
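      For anyone else chasing this: that message is what you get when an ATA-only command is sent to a device that doesn't speak ATA (NVMe drives, USB bridges and the like), so a quick way to see which device produces it is to poke each one in turn (sketch only):
        # Ask each disk for its power state; ATA drives answer normally, while
        # non-ATA devices refuse with an "Inappropriate ioctl" error along the lines of the one in the email
        for d in /dev/sd? /dev/nvme?n1; do
            echo "== $d"
            hdparm -C "$d"
        done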