Djoss

Community Developer

Everything posted by Djoss

  1. Yes, I keep the container image updated with the latest version of CloudBerry Backup.
  2. This issue will be fixed in the next version!
  3. From the control menu, you just need to change the scaling mode to "local". Then restart the container to reset the size of the HandBrake window to what you configured in the container's settings. This should give you the same behaviour as the previous version.
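     For example, assuming your container is named "HandBrake" (adjust to your actual container name), the restart can be done from the command line with:
     docker restart HandBrake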
  4. There is now a control menu to handle that. When you first load the page, or reload it, the control menu is shown for a few seconds before being automatically closed. See the handle on the left side of the screen to open it again.
  5. "Right click" --> "Unselect all" works for me.
  6. What is the issue exactly? The screenshot you shared shows that your backup size is bigger than 5.5TB. Is this the case?
  7. Ouch! Yes, if you can find a way to reproduce it (more easily), that would be useful.
  8. So is it only the number of files to scan that causes the crash? How many do you need to have?
  9. Does your backup have encryption, compression, etc. enabled? You could try a test by performing a backup to a non-USB HDD to see if you get the same speed.
  10. For people who reported a similar issue, a reboot did fix it. Maybe this is something you can try.
  11. You mean that MakeMKV does see the drive, but not the inserted disc?
  12. In fact, there are 2 devices for each drive: one /dev/srX and one /dev/sgY. Both usually need to be mapped. You can use the container log to see if you have correctly mapped the drives:
     [cont-init ] 54-check-optical-drive.sh: looking for usable optical drives...
     [cont-init ] 54-check-optical-drive.sh: found optical drive [/dev/sr0, /dev/sg3], but it is not usable because:
     [cont-init ] 54-check-optical-drive.sh: --> the host device /dev/sg3 is not exposed to the container.
     [cont-init ] 54-check-optical-drive.sh: found optical drive [/dev/sr1, /dev/sg4], group 19.
     [cont-init ] 54-check-optical-drive.sh: WARNING: for best performance, the host device /dev/sr1 needs to be exposed to the container.
     So in your case, one container should have:
     /dev/sr0
     /dev/sg3
     And the other should have:
     /dev/sr1
     /dev/sg4
     The new image is already available.
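     As a sketch (assuming the two containers are created with a plain "docker run" and named MakeMKV-1 and MakeMKV-2; adjust the names, image and other options to your setup), the device mappings could look like:
     docker run -d --name=MakeMKV-1 --device=/dev/sr0 --device=/dev/sg3 jlesage/makemkv
     docker run -d --name=MakeMKV-2 --device=/dev/sr1 --device=/dev/sg4 jlesage/makemkv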
  13. Ok, I will fix the following problem:
     [cont-env ] SUP_GROUP_IDS_INTERNAL: stat: can't stat '/dev/sr1': No such file or directory
     However, it seems that only /dev/sr0 and /dev/sg4 are exposed to the container. /dev/sr1 and /dev/sg3 are not. Note that the content of the field should not have "&&". It should be:
     --device=/dev/sg4 --device=/dev/sr0 --device=/dev/sg3 --device=/dev/sr1
     Once all the devices are properly exposed, I think your problem should be fixed.
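     Once the container is recreated, you can quickly verify from inside it that all four devices are visible, for example with:
     docker exec MakeMKV ls -l /dev/sr0 /dev/sr1 /dev/sg3 /dev/sg4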
  14. MakeMKV has not been started with the correct groups... Could you provide the output of:
     docker logs MakeMKV
     docker inspect MakeMKV
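     If you prefer not to post the full inspect output, the relevant parts here are the device mappings and the supplementary groups; assuming a recent Docker version, they can be extracted with something like:
     docker inspect --format '{{.HostConfig.Devices}} {{.HostConfig.GroupAdd}}' MakeMKV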
  15. Which Linux device(s) did you expose? Can you run "ls -l" on them and report here? For example:
     ls -l /dev/sr1
     Also, run:
     docker exec MakeMKV ps
     And note the PID of the MakeMKV UI. You should have a line that looks like this:
     823 app 0:45 {makemkv} /opt/makemkv/bin/makemkv -std
     In this example, the PID is 823. With this PID, provide the output of (replace 823 with the PID you got previously):
     docker exec MakeMKV cat /proc/823/status
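     If you only want the group information from that output, the interesting part is the "Groups:" line; assuming the PID is 823, it can be isolated with:
     docker exec MakeMKV grep Groups /proc/823/status
     The group IDs listed there should include the group that owns your optical devices, as shown by the "ls -l" output above.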
  16. Do you get the same result with:
     docker exec MakeMKV su-exec app /opt/makemkv/bin/makemkvcon -r --cache=1 info disc:9999
  17. Can you try to run the following command to see if the drive is detected:
     docker exec MakeMKV /opt/makemkv/bin/makemkvcon -r --cache=1 info disc:9999
  18. A new image is being built to fix this.
  19. You can get the container log by running "docker logs VideoDuplicateFinder". You can run the command after the crash.
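     If the log is long, you can also limit it to the most recent lines or save it to a file, for example:
     docker logs --tail 200 VideoDuplicateFinder
     docker logs VideoDuplicateFinder > vdf.log 2>&1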
  20. Do you have anything useful in the container log when it crashes?
  21. I was able to reproduce. This is an issue with the app itself. I created a bug report for this: https://github.com/0x90d/videoduplicatefinder/issues/398
  22. None of what you are discussing should be a concern for FileBot or any application working with /mnt/user/. The issue/behaviour you are raising seems to be related to unRAID internals. For example, you mentioned that moving a file from a cache-enabled share to a cache-disabled share does not produce the expected result. From FileBot's point of view, the file has been moved from one folder to the other. However, under the hood, unRAID did not place the file on the expected disks.
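     To illustrate (the share and file names below are just examples), you can compare what the user share view and the physical disks show after such a move:
     mv /mnt/user/CachedShare/movie.mkv /mnt/user/ArrayShare/movie.mkv
     ls -l /mnt/cache/ArrayShare/ /mnt/disk*/ArrayShare/ 2>/dev/null
     The file always appears under /mnt/user/ArrayShare, but the second command shows which physical device unRAID actually stored it on.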
  23. It should be good now, even without the patch/fix.