Everything posted by Cessquill

  1. I may have got this *completely* wrong, but my understanding of Unraid Dockers in this situation is this... All variables/paths, etc. are created when you initially install the docker. Your template is a snapshot of what the author specified at that time (which you can further edit as you wish). If the author edits, adds or removes options further down the line, your settings are not affected when you update. For example, I manually added a transcode folder at some point, and I'm not sure how an update would handle this. If this is completely bad information, someone let me know and I'll delete it. Don't want bad advice in searches.
  2. Yep, that was the first of two commands in the script; the second one is the one I use. I can't quite remember, but I *think* the rem'd-out command is the one to use in general Docker circumstances, and the second one - the one the script actually uses - is for Unraid.
  3. If you just mean running Nextcloud's cron tasks periodically, I have the following in the User Scripts plugin set to run every 5 minutes:

```shell
#!/bin/bash
#docker exec -u www-data Nextcloud php -f /var/www/html/cron.php
docker exec Nextcloud php -f /var/www/html/cron.php
exit 0
```

  Disclaimer: this was set up a fair while ago, and aside from updating the docker I haven't been diving too deep into Nextcloud, just using it.
  4. This thread is for the linuxserverio version of the Plex docker. You've got the plexinc version there. Also, it looks like you're transcoding directly to your array, which I would have thought would be a nightmare.
  5. I'm not sure whether this fix applies to your situation, but I hope your system is more stable now. I've not had another issue since this thread was created, but there may be different issues with different LSI cards - not sure.
  6. Yes. I have the former - a script that puts the Nvidia card into power-save mode (from a SpaceInvaderOne video). It's duplicated so that it runs on array start, and then hourly.
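  I don't have the exact script to hand, but a minimal sketch of that kind of User Scripts entry might look like the following. Assumptions: that the SpaceInvaderOne approach uses nvidia-smi persistence mode (verify against the video); the commands are echoed to a file rather than executed, so this is safe to run anywhere - drop the echo/tee to run them for real.

```shell
#!/bin/bash
# Sketch only: print the power-save commands instead of running them.
{
  echo nvidia-smi --persistence-mode=1                  # keep driver loaded so the card can idle down
  echo nvidia-smi --query-gpu=power.draw --format=csv   # check that power draw has settled
} | tee /tmp/gpu_powersave_cmds.txt
```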
  7. Yes, but on large libraries you're talking about hours every backup where dockers are off while millions of files are backed up (I estimate that backing up my entire Plex folder keeps my dockers down for around 10 days every year). Then there's the amount of storage those backups take. Touch wood, I've not had to restore Plex before, but I have refreshed all metadata before and it's not the end of the world.
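  On the time/storage point, one way to cut both is to skip the folders Plex can rebuild. A rough sketch - the Cache and Metadata folder names are assumptions from a typical Plex Media Server layout, and the demo paths are throwaway so it's safe to run as-is:

```shell
#!/bin/bash
# Demo: archive Plex appdata while excluding rebuildable folders.
SRC="${SRC:-/tmp/plex_demo_src}"
# Fake up a tiny Plex-like folder structure for the demo.
mkdir -p "$SRC/Cache" "$SRC/Metadata" "$SRC/Plug-in Support"
touch "$SRC/Cache/big.db" "$SRC/Plug-in Support/settings.xml"
# Exclude any component named Cache or Metadata from the archive.
tar -cf /tmp/plex_demo_backup.tar --exclude='Cache' --exclude='Metadata' -C "$SRC" .
tar -tf /tmp/plex_demo_backup.tar   # list what actually got backed up
```

  The archive keeps the settings/database-type files but drops the bulk of the millions of small metadata files, which is where most of the backup time goes.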
  8. Thanks for this - I had the same problem when upgrading, although I'm not sure whether it coincided with Windows updates, as I reverted to 6.11.1 from a Flash backup and still couldn't access the shares. Adding "ntlm auth = Yes" fixed it for me.
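  For anyone searching later: the setting is a Samba config fragment. On Unraid it goes in Settings > SMB > SMB Extras (the [global] placement is an assumption - a bare line there is treated as global config anyway), roughly:

```ini
[global]
    ntlm auth = Yes
```

  You may need to stop the array before the SMB settings become editable, and restart SMB (or the array) for the change to take effect.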
  9. Fair enough. In my circumstance, the specific configuration of a proxy host in NPM required the site it was pointing at to be running before it would start. If you're sure that's not the case I'll butt out.
  10. Check what other dockers NPM relies on (specifically, any more complex sites you have configured). When I was running a site (the Jitsi docker, I think), NPM would never start unless Jitsi was already running; if Jitsi was running, NPM started fine. My guess is that CA is correctly starting NPM, but NPM is shutting itself down because something it relies on is not present. A few notes:
  - I had this a few years ago, but no longer needed Jitsi (if it was that), so didn't find a solution.
  - I don't know whether CA Backup/Restore adheres to the docker system's ordering and delay options when restarting containers.
  - It's easy to test: shut down, one at a time, the dockers that NPM is linked to, and restart NPM each time. If it doesn't restart with a specific docker stopped, it's likely that docker is not (fully) running when NPM is restarted following a backup.
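  The elimination test above boils down to three commands per suspect. "Jitsi" and "Nginx-Proxy-Manager" are placeholder container names - substitute your own. The commands are echoed to a file (not run) so this sketch is safe to execute as-is; drop the echo/tee to do it for real:

```shell
#!/bin/bash
# Sketch: stop one suspected dependency, restart NPM, see if it stays up.
{
  echo docker stop Jitsi                             # stop the suspected dependency
  echo docker restart Nginx-Proxy-Manager            # restart NPM without it
  echo docker ps --filter name=Nginx-Proxy-Manager   # empty result = NPM shut itself down
} | tee /tmp/npm_dependency_test_cmds.txt
```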
  11. It also left my four dockers in question with an unknown update status. That was fixed with a force update on them.
  12. This happened to me just earlier too. I got a command failed next to every update and no "done" button on the modal dialog (which leaves a dead page). Four orphaned images to remove, but it looks like the containers updated OK. Was on the way out so didn't get a chance to log or report.
  13. Three posts above yours. With me, it was making sure no other disk activity on my main AppData drive slowed down anything that Plex was doing. I've currently got a lot of read/write on that drive which is slowing my dockers down (trying to track down the culprit).
  14. Unbalance copies then deletes the source files (similar to how Midnight Commander does). If you're moving a lot of data, there may be a cleanup period at the end where the original files are removed.
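  The copy-then-delete pattern is easy to demonstrate. This is a sketch with plain cp/rm standing in for what Unbalance does internally (an assumption about its internals, not its actual code), using throwaway /tmp paths so it's safe to run:

```shell
#!/bin/bash
# Demo of copy-then-delete: the source only disappears after the copy succeeds.
src=/tmp/unbalance_demo_src
dst=/tmp/unbalance_demo_dst
rm -rf "$src" "$dst"                        # clean slate so the demo is rerunnable
mkdir -p "$src" "$dst"
echo "hello" > "$src/file.txt"
cp -a "$src/." "$dst/"                      # 1) copy everything to the destination
[ -f "$dst/file.txt" ] && rm -rf "$src"     # 2) only then remove the source
```

  Which is why, during a big move, the data briefly exists in both places and the delete "cleanup" phase comes at the very end.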
  15. Absolutely, yes. Thought of that as I was typing - apologies. Fix Common Problems would still warn against a share across multiple pools though, yes? EDIT: I've now gone and set all dockers to point to /mnt/docker/.appdata... Turns out I had already pointed Plex to /mnt/plex/appdata, so I'd obviously started at some point. Also to note, stopped the docker service and set the default docker location to /mnt/docker/appdata
  16. Yep, this is how I have mine set up...
  - An SSD with a cache pool named "Docker"
  - An SSD with a cache pool named "Plex"
  - An "appdata" User share on the Docker pool. All docker appdata is in here except Plex
  - There is also an "appdata" folder on the Plex drive. All Plex appdata is in here
  - "/mnt/user/appdata/" as the Appdata Share (Source) in Backup & Restore sees the folders on both Docker and Plex
  - All docker templates point to /mnt/user/appdata/<docker folder>, but I guess they could point to /mnt/docker/appdata (no idea)
  Downsides...
  - Occasionally, a "PlexMediaServer" folder appears on my docker drive - I have to move it to the Plex drive (no problems with service, and it happens very rarely)
  - Fix Common Problems plugin issues a warning that appdata files/folders exist on the Plex pool. I've just ignored that, thinking there's currently no way around it (unless there is and I don't know)
  17. Thanks - two other disks of the same model had spun up, as had the parity. I'll try swapping them around and see what happens.
  18. Hi - On Saturday, I clicked to Spin Up all drives, and when it got to disk 15, both 15 and 16 moved into Unassigned Devices (marked as Array). At this point I rebooted, the attached diagnostics file was automatically created and Unraid rebooted into a parity check. Fortunately the drives were back in the array. I'm trying to work out whether it's a power supply issue (I'm running it pretty full, although it boots fine), an issue with my LSI backplane, a new issue with the latest kernel or something else. I'm aware of the Ironwolf/LSI combo issue that drops drives off when rebooting and have applied recommended settings to all Ironwolfs. Does anybody have any ideas? Unraid is normally rock solid for months, only rebooting for system updates, so I'm a little worried. Any guidance appreciated. unraid1-diagnostics-20220903-1044.zip
  19. I just did it as per the Plex instructions and had no problems - signed out of all clients. Signed back in locally, went to general settings, clicked Claim Server and it set itself up. Didn't need to curl or change files. It was a bit slow, but it got there. Remote access is fine, etc. From what I understand, I think the Plex servers have been over-run with people resetting their credentials. Therefore, I *suspect* that most of these solutions would work under normal circumstances, and they may have worked for you guys now because their servers have calmed down some.
  20. Hi - came to my machine this morning and could not get to the web interface (constantly loading). I could get to network shares, and dockers appeared to be working (with the exception of Plex). I could putty onto the server and ran diagnostics (attached), but when I tried a reboot it looked like it was working ("The system is going down for reboot NOW!") but nothing happened. Tried an orderly shutdown using Supermicro IPMI, but no dice - had to fully power cycle. My server is normally pretty stable (if a little full) and goes for months without rebooting. I had recently started to get something periodically maxing the CPU for a couple of minutes that I was trying to locate, so over the weekend I'd uninstalled the File Integrity plugin and rebooted. Backups normally take place on a Monday night, but are typically completed by the morning (and the dockers were running). Any ideas? unraid1-diagnostics-20220816-0943.zip
  21. It only takes a couple of minutes, and it can be done on existing drives while running <6.9 and on new drives while they're formatting. The instructions are only long because they walk through every step - it's really pretty straightforward. Heck, if you're on 6.8.x, go through the steps anyhow - you won't break anything, and you're good to upgrade whenever you want. As I understand it, it's more of a Seagate/Linux issue. Whilst I'd love to test 6.10, I don't have spare hardware.
  22. As I understand it, this issue causes the drives to drop off when they spin down/up whilst the array is running. If you're losing drives on a machine restart, I suspect that may be a different issue.
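  For reference, the mitigation this thread talks about is applied per-drive with Seagate's SeaChest utilities. The binary and flag names below are from memory and should be treated as assumptions - verify them against the SeaChest documentation before running anything, and the device list here is a placeholder (SeaChest's scan option lists the real ones). The commands are echoed to a file rather than executed, so this sketch is safe as-is:

```shell
#!/bin/bash
# Sketch: disable EPC and low-current spinup on each Ironwolf (echoed, not run).
for dev in /dev/sg1 /dev/sg2; do   # placeholder devices - substitute your own
  echo SeaChest_PowerControl -d "$dev" --EPCfeature disable
  echo SeaChest_Configure -d "$dev" --lowCurrentSpinup disable
done | tee /tmp/seachest_cmds.txt
```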
  23. Are you sure the drives are set correctly? And are they dropping offline during normal running of Unraid (i.e., not during a reboot)? I have 8 of these models in my array, and all have been stable since applying the settings to all of them. I occasionally get a drive drop off during a reboot, but I think that's because I'm pushing my PSU too hard (I need to cut down on drives).