Kaizac

Members • Posts: 470 • Days Won: 2

Everything posted by Kaizac

  1. It should all be 99/100. I just created a new Sonarr instance, even with umask in the Docker template, and did a library import. It imported quickly and populated the episodes, so Sonarr is working at least. Regarding your mount script failing the first time: it could be a leftover check file it sees, which stops the script from continuing. It's hard for me to tell when I can't see how your system and folders look before mounting. I would reboot the server without mounting (no rclone scripts, all Docker autostarts disabled), then use the commands I posted for the mount and merger (adjusted to your folder names) and see if everything is populated properly. If so, start up Sonarr and see what happens. By the way, did you disable the analyze setting in Sonarr's settings?
  2. Unraid version is the latest stable: 6.11.5. Sonarr is the latest develop from linuxserver.
  3. Nothing out of the ordinary in your docker list. The fact that the permissions keep resetting is strange. You could run the mount script once and see what happens. Anyway, I run my own mount script, but with --umask 002, --uid 99 and --gid 100. It might not be the solution for you, but it's worth a try; I never get the wrong folder permissions. This is my full mount command:

     rclone mount --allow-other --umask 002 --buffer-size 256M --dir-cache-time 9999h --drive-chunk-size 512M --attr-timeout 1s --poll-interval 1m --drive-pacer-min-sleep 10ms --drive-pacer-burst 500 --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes --uid 99 --gid 100 tdrive_crypt: /mnt/user/mount_rclone/Tdrive &

     And the merger command:

     mergerfs /mnt/user/LocalMedia/Tdrive:/mnt/user/mount_rclone/Tdrive /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
  4. I can only think of two possible solutions right now. First, I would remove the /downloads and /tv paths in your Docker template; you don't need them since everything goes through /gdrive. Also delete the UMASK variable in the Docker template. I wonder what permissions you then have under media management within Sonarr. I have "set permissions" on, chmod 777, chown empty. After you've changed that, try doing the library import again. If that doesn't work, try adding a new show (without auto search), manually search for an episode, and download it. See if Sonarr can import and process the file; if it can, the permissions should be good. If none of that shows any anomalies, I would try switching to the latest branch in your Docker template and see if it works there. Maybe it's an issue with Sonarr itself, and then we can't fix it. But once your library is imported, you can always change back to develop.
  5. Just to be clear: you do a library import from /gdrive/mount_unionfs/gdrive_media_vfs/tv_shows? And what are you using for your mount script? Is it a merger of a local drive and the rclone mount? When you go into your Sonarr docker console, cd to /gdrive/mount_unionfs/gdrive_media_vfs/tv_shows and do an ls or lsd, does it show all the series you expect? Can you browse to your tv_shows folder through Windows Explorer (or the Mac equivalent) on your own computer and play the mkv files from there? I'm wondering if it's just an API ban now; if the files play fine, we can rule that out.
  6. Yeah, so they are owned by root and you need nobody/users (which is abc/abc in the docker). I would first try the tv_shows folder. Go to your terminal and enter: newperms /mnt/user/mount_unionfs/gdrive_media_vfs/tv_shows
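For anyone without Unraid's newperms helper at hand, a rough sketch of what it does (an assumption on my part: it resets ownership to nobody/users, i.e. uid 99 / gid 100, and opens up read/write permissions), demonstrated on a scratch directory rather than the real share:

```shell
# Scratch directory stands in for /mnt/user/mount_unionfs/gdrive_media_vfs/tv_shows
target=$(mktemp -d)
touch "$target/episode.mkv"

# chown needs root; ignore the failure when run unprivileged
chown -R nobody:users "$target" 2>/dev/null || true

# Symbolic mode X keeps the execute bit on directories but not on plain files
chmod -R u=rwX,g=rwX,o=rX "$target"
```

After this, directories end up 775 and media files 664, which is what the nobody/users (abc/abc) user inside the docker needs.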
  7. Ok, that looks good. I saw some other paths in your Docker template, like /downloads, so I hope you don't use those but only point to /gdrive/xx. Anyway, you could do an ls -la in your TV shows folder from a terminal and see whether the permissions are causing problems.
  8. Correct, but with Dropbox you are also required to ask for additional storage. So if you are just starting out, you could ask for a good amount of storage and then request more whenever you need it. It really depends on you whether it is worth the price and hassle of getting the storage.
  9. It's now Workspace Enterprise Standard, which is around 17 euros per month. But from what I've found online, Google stopped offering unlimited storage with one account. You'll need 5 now, I think, and even then it's only 5TB per user; maybe they will give you more if you have a good business case. OneDrive and Dropbox are the only alternatives now, I think.
  10. Glad it worked! And yes, rebuilding those libraries is annoying. Also, don't be surprised if you get API banned after such big scans. It should still scan the library fine, but playback won't work. You could always start a new library for the 200k and delete the old one once the scan is finished.
  11. Yep and then inside the docker like Plex you use /user/gdrive/audio for example.
  12. You're mixing data paths, so the host system no longer sees it as one drive, which makes it look like files have been moved. The ownership changing could be a Lidarr issue, if you changed its settings to chown the files. Fix the file paths first: you're using /data and /user mixed, and you should only use /user in all the dockers that use the merger folder. It's a common issue when mixing binhex and linuxserver dockers, for example. So you need to get into the habit of using the same paths when dockers need to communicate with each other. Right now Lidarr is telling Plex that the files are located at /user, but Plex only knows /data. I would suggest moving Plex to the /user mapping and then doing a rescan.
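As an illustration of consistent mappings (the host and container paths here are examples, not your exact template): both dockers should map the same host folder to the same container-side path, so paths they exchange line up.

```shell
# Both containers see the merger folder at /user, so a path Lidarr
# reports to Plex resolves to the same files inside the Plex container.
docker run -d --name lidarr -v /mnt/user/mount_unionfs/gdrive:/user ...
docker run -d --name plex   -v /mnt/user/mount_unionfs/gdrive:/user ...
```

In the Unraid template this is the "Container Path" field of the volume mapping: make it /user in every docker that touches the merger folder.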
  13. You're pointing your dockers to your local media folder instead of the merger folder (local + rclone mount).
  14. Glad you got it working! It's an annoying thing to run into when you've been away for a while and missed the news that everyone had to do this. Next time, just read back a bit in this topic; the Plex claim issue was discussed in the posts right before your first one. Hopefully these kinds of problems won't be necessary anymore in the future. I'm running 1.30.0.6486 with an Intel iGPU and hardware transcoding works fine (just checked it for you). But you're using Nvidia? If so, that might indeed be the issue. I'm not a fan of using Nvidia for Plex; too many dependencies and unknowns, it seems.
  15. @roflcoopter I'm very interested in your software, so hopefully you can explain some messages/errors I'm getting. I'm running a server with an Intel iGPU and used your viseron docker, but also experimented with the amd64-viseron docker. Both fail to recognize VA-API, while OpenCL is reported as available. I've passed /dev/dri through as a device, both as a path and as an extra parameter (the latter is the method linuxserver.io uses for their Plex dockers). The extra-parameter method seems to put an actual load on my iGPU, but that could be coincidence since it's not using the iGPU constantly. Both methods result in successful object detection. I've changed PUID and PGID to 99/100; I don't think 0/0 is needed, and on Unraid most dockers run at 99/100. ls /dev/dri also shows the iGPU within the docker. I've gotten object detection hits successfully, and they are amazingly accurate, but I can't place the errors in the log below, which also shows OpenCL is available but VA-API isn't. Some things that I've noticed:

     Playback of the recordings is not possible for me in my web browser; it says the format is not supported. Live view does open an MJPEG stream in my browser, but recordings just don't play. The files on my server work, though. Maybe because of the error in the log below about mp4 not being supported?

     All the recordings are placed in one folder with random names, together with a thumbnail of the same name. For me it would be better to change this to date/timestamp names in their own folders. I can have hundreds of hits on a daily basis, so having them all in one big folder per camera will be difficult to go through, especially when file dates get corrupted and there is no way to search by date.

     If you need any testing from me, please let me know! I think your software has incredible potential.
[2023-01-02 09:35:14] [INFO ] [viseron.core] - -------------------------------------------
[2023-01-02 09:35:14] [INFO ] [viseron.core] - Initializing Viseron
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setting up component data_stream
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setup of component data_stream took 0.0 seconds
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setting up component webserver
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setup of component webserver took 0.0 seconds
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setting up component ffmpeg
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setting up component darknet
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setting up component nvr
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setup of component nvr took 0.0 seconds
[2023-01-02 09:35:14] [INFO ] [viseron.components] - Setup of component ffmpeg took 0.0 seconds
[2023-01-02 09:35:15] [INFO ] [viseron.components] - Setup of component darknet took 0.3 seconds
[2023-01-02 09:35:15] [INFO ] [viseron.components] - Setting up domain camera for component ffmpeg with identifier camera_1
[2023-01-02 09:35:15] [INFO ] [viseron.components] - Setting up domain object_detector for component darknet with identifier camera_1
[2023-01-02 09:35:15] [INFO ] [viseron.components] - Setting up domain nvr for component nvr with identifier camera_1
[2023-01-02 09:35:17] [WARNING ] [viseron.components.ffmpeg.stream.camera_1] - Container mp4 does not support pcm_alaw audio codec, using mkv instead. Consider changing extension in your config.
[2023-01-02 09:35:17] [INFO ] [viseron.components] - Setup of domain camera for component ffmpeg with identifier camera_1 took 2.0 seconds
[2023-01-02 09:35:17] [INFO ] [viseron.components] - Setup of domain object_detector for component darknet with identifier camera_1 took 0.0 seconds
[2023-01-02 09:35:17] [INFO ] [viseron.components.nvr.nvr.camera_1] - Motion detector is disabled
[2023-01-02 09:35:17] [INFO ] [viseron.components.nvr.nvr.camera_1] - NVR for camera Door initialized
[2023-01-02 09:35:17] [INFO ] [viseron.components] - Setup of domain nvr for component nvr with identifier camera_1 took 0.0 seconds
[2023-01-02 09:35:17] [INFO ] [viseron.core] - Viseron initialized in 3.1 seconds
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
************************ UID/GID *************************
User uid: 0
User gid: 0
************************** Done **************************
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-gid-video-device: executing...
[cont-init.d] 20-gid-video-device: exited 0.
[cont-init.d] 30-edgetpu-permission: executing...
************** Setting EdgeTPU permissions ***************
Coral Vendor IDs: "1a6e" "18d1"
No EdgeTPU USB device was found
************************** Done **************************
[cont-init.d] 30-edgetpu-permission: exited 0.
[cont-init.d] 40-set-env-vars: executing...
****** Checking for hardware acceleration platforms ******
OpenCL is available!
VA-API cannot be used
CUDA cannot be used
*********************** Done *****************************
[cont-init.d] 40-set-env-vars: exited 0.
[cont-init.d] 50-check-if-rpi: executing...
********** Checking if we are running on an RPi **********
Not running on any supported RPi
*********************** Done *****************************
[cont-init.d] 50-check-if-rpi: exited 0.
[cont-init.d] 55-check-if-jetson: executing...
****** Checking if we are running on a Jetson Board ******
Not running on any supported Jetson board
*********************** Done *****************************
[cont-init.d] 55-check-if-jetson: exited 0.
[cont-init.d] 60-ffmpeg-path: executing...
****************** Getting FFmpeg path *******************
FFmpeg path: /home/abc/bin/ffmpeg
*********************** Done *****************************
[cont-init.d] 60-ffmpeg-path: exited 0.
[cont-init.d] 70-gstreamer-path: executing...
***************** Getting GStreamer path *****************
GStreamer path: /usr/bin/gst-launch-1.0
*********************** Done *****************************
[cont-init.d] 70-gstreamer-path: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[ WARN:[email protected]] global /tmp/opencv/modules/core/src/utils/filesystem.cpp (489) getCacheDirectory Using world accessible cache directory. This may be not secure: /var/tmp/
[2023-01-02 09:35:33] [ERROR ] [viseron.components.nvr.nvr.camera_1] - Failed to retrieve result for object_detector, message repeated 4 times
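For reference, the passthrough and PUID/PGID setup described in the post above boils down to something like this (the image tag and appdata path are illustrative, not taken from my actual template):

```shell
# iGPU passthrough via the "extra parameter" method, plus nobody/users ids
docker run -d --name viseron \
  --device=/dev/dri \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/viseron:/config \
  roflcoopter/viseron:latest
```

The alternative is adding /dev/dri as a device path in the Unraid template instead of the extra parameter; both end up as a --device flag on the container.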
  16. PGS subtitles cause a single-threaded transcode, so that's often too heavy for the client/server.
  17. Ah, good find! I don't use that mount script myself, so I never caught it. I just checked, but I still have the same folder in /tmp even with the current CA Backup (V3). Are you sure the path changed? What client are you streaming from? I've noticed this problem when using my MiBox, and then I need to restart the device; my Shield Pro never has the issue. Does direct play through your file browser show the same issue?
  18. I'm trying to remember and find where CA Backup is mentioned in his scripts. Can you point me to it? AFAIK there is no mention of, or importance to, CA Backup for this functionality?
  19. It can't find the mountcheck file in your gdrive folder, so you probably made an error in the configuration of the mount or upload script.
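For context, the mount script's check is essentially this (a minimal sketch; the real script's paths, names, and retry logic differ, and a scratch directory stands in for the rclone mount point here):

```shell
# Scratch directory stands in for the mount point, e.g. /mnt/user/mount_rclone/gdrive
mount_point=$(mktemp -d)

# The script considers the remote "up" only when this file is visible
is_mounted() {
    [ -f "$1/mountcheck" ]
}

if is_mounted "$mount_point"; then
    echo "mounted"
else
    # the upload script is expected to have copied mountcheck to the remote
    echo "mountcheck missing - check the mount/upload script config"
fi
```

So if the error persists, verify that your upload script actually created and uploaded mountcheck, and that both scripts point at the same remote and folder.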
  20. Google Drive is still the best: around 18 USD gives unlimited, IF you can get unlimited. I think they changed it so that new signups only get 5TB, and you need multiple users (x 18 USD each) to grow by 5TB each time. Some countries/website versions still show unlimited for Google, others don't, so it's hard to give the right answer for your situation. You can do the trial and see whether they still offer unlimited if you were to subscribe. Regarding OneDrive, you have to read the fine print: you will need 5 users, and then they will give you 5x25TB, if I understand correctly. Beyond that it will be SharePoint, and I have no idea about speeds or how rclone deals with that for streaming. Dropbox was another alternative, but it seems they recently killed unlimited storage, so it's only available for Enterprise?
  21. You have to add the PLEX_CLAIM parameter as a variable to your Docker template. If you find that too difficult, delete the docker (not the appdata!!) and just install the Plex docker again from the app store. I asked if anything shows in your admin panel; so look in the admin panel: do you have any icons that alert something or seem to need attention?
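In template terms it's just one extra environment variable (the token value below is a placeholder; you get the real one from plex.tv/claim):

```shell
# Add to the Plex docker template as a variable:
#   key:   PLEX_CLAIM
#   value: your claim token, e.g.
-e PLEX_CLAIM="claim-xxxxxxxxxxxxxxxxxxxx"
```

Claim tokens expire after a few minutes, so restart the container right after adding the variable.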
  22. What makes you say that is not the issue you're having? You're giving absolutely no information for us to work with to help you.
  23. What does it show in your admin panel? Is there any error or alert?