dgwharrison

Members
  • Content Count

    75
  • Joined

  • Last visited

Community Reputation

1 Neutral

About dgwharrison

  • Rank
    Advanced Member

  1. Hi @spants, thanks for the pi-hole docker. I'd like to set this up so I can use it with the Let's Encrypt reverse proxy, but I notice that when I set a custom password for key 9, WEBPASSWORD, it doesn't seem to work. The default 'admin' still works, but not what goes in the field. I can't see anywhere in the UI to set the password, so I'm assuming it's in a config file, which means you'd have to SSH into the docker, and even if you changed it there it wouldn't persist across docker image updates. Is there something I should check, or is this a known issue?
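(For anyone hitting the same thing: one workaround, assuming the container is named "pihole" - substitute your own container name - is to reset the password from inside the running container using Pi-hole's own CLI, which does persist in the mapped config volume:)

```shell
# Reset the Pi-hole web admin password inside the running container.
# "pihole" here is an assumed container name - change it to yours.
docker exec pihole pihole -a -p 'MyNewSecretPass'

# Running it with no password argument clears the password entirely:
# docker exec pihole pihole -a -p
```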
  2. Actually, ignore my last post: the docker mapping of /tmp to /tmp/plex doesn't survive a reboot because the directory doesn't exist yet. I fixed it by adding the following to the /boot/config/go file:

     # Create /tmp/plex for Plex in-RAM transcoding
     mkdir /tmp/plex

     I also changed the docker mapping from /tmp/plex to just /tmp, and configured Plex to use /tmp/plex in the transcoder settings.
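(A slightly fuller sketch of that go-file addition - the chmod is my own addition, since the container user usually needs write access to the directory:)

```shell
# Lines to append to /boot/config/go so the RAM transcode dir
# exists after every boot (/tmp is wiped on reboot).
mkdir -p /tmp/plex
chmod 777 /tmp/plex   # allow the Plex container user to write here
```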
  3. For some reason, when I saved the docker path mapping of /tmp to /tmp/plex it didn't stick; when I checked it again it was back to /tmp to /tmp. So I just edited the docker mapping again to /tmp to /tmp/plex and rebooted, leaving autostart on for the first run, and hey presto, it was fine. So the solution is: if you want Plex to transcode in RAM, make sure it's doing it in /tmp/plex.
  4. Hi @itimpi, it's never been a problem before. I'll change it to /tmp/plex for now, and later, if it really is a RAM capacity issue, I'll move it to the SSD - just trying to prolong the life of the SSDs if possible. The thing is, though, even after a reboot with Plex not running, the plugins are all still broken - with almost 25GB of RAM available. Do you or @Squid know how to fix them?
  5. Ok, so how do I fix it? And can I not use /tmp for plex transcoding in RAM on UnRAID?
  6. Mmmm..... No, not manually. I do have my Plex transcode dir set to /tmp, but at the time of the logs and screenshots nothing was playing, so binhex-plexpass resource usage was little to nothing. Confirmed, because I just got the same results on the plugins and main pages with nothing playing at all - I've stopped the docker. I'm not a Linux expert, but my understanding is that /tmp will write to disk (swap) if RAM is exhausted, right? So some paging, but not complete loss of data?
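(A quick way to check that assumption: see what filesystem actually backs /tmp. On stock Unraid the root filesystem is a RAM disk and there is no swap configured by default, so a full /tmp exhausts memory rather than spilling to disk:)

```shell
# Show the usage and filesystem type behind /tmp.
# A type like "tmpfs" or "rootfs" means it lives in RAM.
df -h /tmp
stat -f -c %T /tmp
```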
  7. Hi @Squid, maybe this is one you might be able to answer. Take a look at the attached screenshots. I get a whole whack of PHP errors on the plugins page and a couple on the main page. Diagnostics attached. zeus-diagnostics-20190828-0858.zip
  8. Hi @johnnie.black, I removed that disk and rebuilt the array, but I still get the email. Any other ideas?
  9. Appears to have worked just fine for both the cache and the array disks, content still all there. Thanks @itimpi, I just wanted to check it was safe before I proceeded.
  10. Hi @itimpi, thanks for the detailed answer, this helps my understanding a lot. I just wanted to make sure: will data on the cache be treated the same way as data on the array disks? i.e. for my shares that live exclusively on the cache, like appdata and domains, will Unraid rebuild those if I preserve all drive assignments?
  11. I've recently replaced a 3TB disk in the array with a 12TB disk, which went fine - it rebuilt successfully. However, there are other disks that are failing and I'd like to remove them. I've already moved the content off the drives I want to remove using rsync and then deleting the source. I've read the wiki procedure on removing drives using the "New Config" tool. My understanding is that the files on all the disks (aside from the disks being removed) will be fine and I don't need to copy them off (which is good, as I don't have the capacity for a full backup). The question I have is: what about the cache? I have a lot of stuff exclusively on the cache, for example appdata, domains, system. Is all that going to be fine? I'm using 6.7 and I can see there is an option to "preserve current assignments", which presumably I want set to 'all', right? When I rebuild the array, do I need to put each disk back where it was? Specifically, I have 9 disks plus parity at the moment, and disk8 is the one I want to remove - do I leave disk 8 blank in the new config, or can I move disk 9 to slot 8? Not that it matters, it's OK to have a gap I guess, I'm just interested in the structure.
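(For reference, the rsync-then-remove step I used looks roughly like this - disk numbers are just examples, and you should verify the copy before anything is deleted; --remove-source-files only deletes each file after it has transferred successfully:)

```shell
# Move everything off disk8 onto disk3.
# -a preserves permissions/timestamps, -X keeps extended attributes,
# --remove-source-files deletes each source file once copied.
rsync -avX --remove-source-files /mnt/disk8/ /mnt/disk3/

# rsync leaves the (now empty) directory tree behind; clean it up:
find /mnt/disk8/ -mindepth 1 -type d -empty -delete
```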
  12. So I've tried setting the cache NVMEs to not spin down, I've uninstalled and reinstalled Preclear Disk, and I've checked the file system of each disk in maintenance mode without the -n flag. I'm at a loss for what to do next - I still get the email every day. I figure it probably isn't the disks, because I upgraded from 6.6.7 to 6.7 before I replaced disk 7, and it was after the 6.7 upgrade that the message started coming. What should I do next? I've attached a new diagnostics file. zeus-diagnostics-20190521-0023.zip
  13. Thanks Squid, I'll try disabling spin down on the cache. I'm not worried about the disk that won't mount, it's on the way out and why the array was rebuilding. I'll also spin up in maintenance mode and check the file systems.
  14. Hi @Squid, yes - either it wasn't preclear, or, if it was, uninstalling/reinstalling didn't fix it. I received the message again today. Diagnostics attached. zeus-diagnostics-20190517-2324.zip