BestITGuys

Everything posted by BestITGuys

  1. Sorry for asking almost the same question as the OP, but I just wanted to clarify something... I'm trying to do the same thing (replace parity with a larger drive, and once it's done rebuilding, add the original parity drive to the array). I'm not using the same port, and both drives will always be connected at the same time. So, when I get to Step 5 and change the Parity1 drive to the larger one, I get a warning that it's the wrong drive (see attached pic). It looks like it'll let me start the array anyway, but that warning got me worried that I might trash my array by doing it this way. Can you guys please let me know if that's exactly what I'm supposed to be doing?
  2. I'm trying to update from 6.9.2 to 6.10.3, but there seems to be no way to do it. It also shows my current status as "unknown", and if I change the branch from stable to next, the whole row disappears. I'm attaching the screenshots... I've seen other threads where the Plugins page was also missing the "Check Updates" button, but in my case it's there. So, what can I do to fix this? Thanks
  3. Search has been broken since the last update. I'm getting the following error in the log:
       File "/config/data/qBittorrent/nova3/nova2.py", line 36, in <module>
         import urllib.parse
     ImportError: No module named parse
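     For what it's worth, that ImportError is exactly what Python 2 prints for "import urllib.parse" (urllib.parse only exists in Python 3), so it looks like the search plugins are being launched with the wrong interpreter. A quick way to see the difference from the container console, assuming both interpreters are even installed there:

        # urllib.parse exists in Python 3 but not in Python 2, so the same
        # error shows up if nova2.py is run with a Python 2 interpreter
        python3 -c "import urllib.parse; print('ok')"   # prints: ok
        python2 -c "import urllib.parse"                # ImportError: No module named parse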
  4. That makes perfect sense. I didn't realize those variables could be used to replace the /watch and /output mappings.
  5. Just a suggestion -- maybe you can add an option (or change the existing setup) so that instead of /watch and /output being mapped as separate volumes, they could be subfolders of a common volume, something like /media. This would make moving large files from the watch folder to output much faster, since the move stays on the same filesystem. You could map a new /media volume (or just use the /storage volume and point it to the media folder), and then the names of the watch and output folders would be docker variables instead of paths (something like the sketch below).
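     For illustration, a minimal sketch of the kind of mapping I mean, using the HandBrake container just as an example (the host path and the WATCH_DIR/OUTPUT_DIR variable names are made up here, not the container's actual parameters):

        # One common volume instead of separate /watch and /output mappings;
        # the subfolder names are passed in as variables, so a move from the
        # watch subfolder to the output subfolder is just a rename
        docker run -d --name=handbrake \
          -v /mnt/user/media:/media \
          -e WATCH_DIR=/media/watch \
          -e OUTPUT_DIR=/media/output \
          jlesage/handbrake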
  6. So, I've had a couple of days to think about this, and I think I came up with a relatively easy way to implement this (or at least I hope so). Couldn't you just add an extra parameter that could be passed to the URL that tells it to start the container and launch the webUI (if one is defined)? So, if the current URL to edit a Docker config is something like this (using qBittorrent as an example):
     http://192.168.1.100/Docker/UpdateContainer?xmlTemplate=edit:/boot/config/plugins/dockerMan/templates-user/my-binhex-qbittorrentvpn.xml
     The new one could be something like this:
     http://192.168.1.100/Docker/UpdateContainer?startUI=true&xmlTemplate=/boot/config/plugins/dockerMan/templates-user/my-binhex-qbittorrentvpn.xml
     And all that really has to change on the back end is that the script that processes the URLs needs to parse for that one extra parameter, and if it's present, it starts the container and launches the webUI instead of opening the Docker edit config page.
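     To be clear about how small the change would be, here's a rough sketch of the extra branch I'm imagining (the real dockerMan back end isn't shell, and the parameter and container names here are just placeholders):

        #!/bin/sh
        # Hypothetical sketch only: if the startUI parameter was passed in the
        # URL, start the container instead of opening the edit page
        startUI="$1"                     # would come from the parsed query string
        name="binhex-qbittorrentvpn"     # would come from the xmlTemplate parameter

        if [ "$startUI" = "true" ]; then
            docker start "$name"
            # ...then redirect the browser to the container's webUI
        fi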
  7. It's been a while since this discussion, but it's definitely an issue with jDownloader's 7zip library -- it can't handle rar5 archives. The underlying library has been updated, though, and if you manually copy it into the docker, it stops having issues with rar5 archives. I've already tested this on my end, but I'm hoping that @Djoss could update this docker with the updated library. In the meantime, here's what you need to do to fix it manually:
     1. Download the updated library from here
     2. Extract the contents of the /libs folder. There should be 2 files there -- sevenzipjbinding.jar and sevenzipjbinding-Linux-amd64.jar
     3. The first file is OK as-is, but the second one needs to be renamed to sevenzipjbindingLinux.jar
     4. Copy both files into whatever share you have mapped to jDownloader's /output
     5. Open the docker console and type in the following commands (which rename the old lib files, move the new ones into place, and set the correct owner):
        mv /config/libs/sevenzipjbinding.jar /config/libs/sevenzipjbinding.jar.old
        mv /config/libs/sevenzipjbindingLinux.jar /config/libs/sevenzipjbindingLinux.jar.old
        mv /output/sevenzipjbinding* /config/libs/
        chown app:users /config/libs/sevenzipjbinding*.jar
  8. This is actually a follow-up to this request, but since it appears that can't be done, I thought this could be useful... Basically, if you have a bunch of related utility dockers (like qBittorrent, jDownloader2, Krusader, and maybe MKVToolNix), you would usually start all of them together. It would be nice if you could link several dockers into a group that could be started/stopped together. And as an added bonus, if those dockers have a webUI, open all the webUIs after they are started.
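     Under the hood, starting such a group wouldn't need to be anything fancier than something like this (container names are just examples):

        # Start a group of related containers together; stopping the group
        # would be the same loop with "docker stop"
        for c in binhex-qbittorrentvpn jdownloader-2 krusader mkvtoolnix; do
            docker start "$c"
        done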
  9. That makes sense, and at least having the docker auto-start if the "open webUI" option is selected would be nice. But I'm also not ready to give up on the idea of just listening for a URL request and starting the docker from that. Maybe I'm not understanding how the URL requests are processed, but I thought that there is some kind of web server (probably nginx) always listening for URL requests. And all the dockers have their port bindings mapped, so if a docker is running and it has a webUI, the web server forwards that request to the docker on that port. If the above is correct, then couldn't the web server check whether that docker is running, and if not, start it?
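     Conceptually, all the check would have to be is something like this, done before the request is forwarded to the container's mapped port (container name is just an example):

        # Sketch only: if the target container isn't running, start it first,
        # then proxy the request to its webUI port as usual
        NAME=binhex-qbittorrentvpn
        if [ "$(docker inspect -f '{{.State.Running}}' "$NAME" 2>/dev/null)" != "true" ]; then
            docker start "$NAME"
        fi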
  10. Right now, if a docker is stopped, then (obviously) if you try to pull up the URL of its webUI, it will time out. I'm proposing that instead, the corresponding docker should be started, and then the URL will open to the webUI. This would be very useful if you have a bunch of small dockers that you may not use often but don't want to leave running all the time (things like download managers, bittorrent clients, video transcoders, etc.). So, assuming that the user bookmarks the URLs for the dockers, they can be auto-started instead of having to log into Unraid and start the docker manually. I'm hoping that this could be implemented with only minor modifications to the current way that webUI requests are processed.