niXta-

Members
  • Posts: 21
  • Joined
  • Last visited
Everything posted by niXta-

  1. rar2fs is linked against libunrar.so.6.1.7. Running it gives:

     rar2fs: error while loading shared libraries: libunrar.so.6.1.7: cannot open shared object file: No such file or directory

     Either it needs to be compiled against libunrar.so, or you need to:

     ln -s /usr/lib64/libunrar.so /usr/lib64/libunrar.so.6.1.7

     Would be nice to have it all set up when installing from NerdTools. Thanks for all your hard work btw. Is there a way to send a 🍺?
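The symlink workaround above can be wrapped in a small helper; `link_libunrar` and its arguments are hypothetical names, and the /usr/lib64 paths are assumptions for a stock Unraid/Slackware layout:

```shell
# Hypothetical helper for the symlink workaround: create the version-
# suffixed name rar2fs was linked against, pointing at the installed
# libunrar.so. Does nothing if the link already exists.
link_libunrar() {
    libdir=$1    # e.g. /usr/lib64
    wanted=$2    # e.g. libunrar.so.6.1.7
    if [ ! -e "$libdir/$wanted" ] && [ -e "$libdir/libunrar.so" ]; then
        ln -s "$libdir/libunrar.so" "$libdir/$wanted"
    fi
}

# On Unraid this would be:
#   link_libunrar /usr/lib64 libunrar.so.6.1.7
#   ldd "$(command -v rar2fs)"   # should now resolve libunrar.so.6.1.7
```

Baking the suffixed name into the package itself, as suggested, would make the helper unnecessary.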
  2. Finally got around to this. Had to redo the SlackBuild since they changed the source. Updated rar2fs to 1.28.0 and unrar to 5.8.5. Thanks again, the new releases of rar2fs fix many bugs and memory leaks. Can we get an update (1.29.4), pretty please?
  3. I also noticed this yesterday; it should use either single brackets or double brackets consistently:

     # Check CA Appdata plugin not backing up or restoring
     if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then

     @DZMM Hope you didn't chug it all at once
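A minimal demonstration of the corrected check, using the CA Backup paths from the post; `ca_busy` is a hypothetical helper name. Either bracket style works on its own; the bug was mixing "[" with "]]":

```shell
# Paths from the CA Appdata Backup plugin, as quoted in the post.
backup=/tmp/ca.backup2/tempFiles/backupInProgress
restore=/tmp/ca.backup2/tempFiles/restoreInProgress

ca_busy() {
    # POSIX single-bracket form; the bash double-bracket form
    # `[[ -f "$backup" ]] || [[ -f "$restore" ]]` is equivalent here.
    [ -f "$backup" ] || [ -f "$restore" ]
}

if ca_busy; then
    echo "CA Backup/Restore in progress - skipping"
fi
```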
  4. Thanks, I moved it in the web interface and hit no 750GB limit, although Google's readme indicated there is one. Yes, the SAs are set up: group created, group added to the team/shared drive, SA JSONs generated. I edited my old rclone gdrive_media_vfs mount, removed the secrets, linked the SA JSON and added the team drive. Then I used the same crypt mount as before. It seems to work as expected, though I haven't hit 750GB yet to check for sure. Do you use separate rclone gdrive mounts for streaming and upload? Are there any benefits to doing so?
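A sketch of what the resulting rclone config might look like, assuming the remote names from the post; the team_drive ID, service-account JSON path, and passwords are placeholders, not real values:

```shell
# Append a Shared/Team Drive remote plus a crypt layer on top of it
# (placeholder values throughout - substitute your own).
cat >> "${RCLONE_CONF:-$HOME/.config/rclone/rclone.conf}" <<'EOF'
[gdrive_media_vfs]
type = drive
scope = drive
team_drive = 0ABCdefGHIjklMNO
# Points at one SA; an upload script can rotate accounts per run with
# --drive-service-account-file=/path/to/another_sa.json
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json

[gdrive_media_vfs_crypt]
type = crypt
remote = gdrive_media_vfs:crypt
password = <obscured-password>
password2 = <obscured-salt>
EOF
```

Keeping the OAuth client secrets out of the remote and relying on the service-account file is what lets the upload script swap accounts without touching the config.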
  5. Amazing work on the scripts! It's been ages since I touched my mount scripts. I have never used team/shared drives nor service accounts. I've had a look at the "new" scripts and wonder what I need to do to get all the new yummy features. I've created a shared drive and added the service account group to it. I already have a folder in "My Drive" with plenty of TB in it. Do I need to move it to the shared drive? Can I just move it from the drive.google.com interface? Does it only move 750GB/day as the readme says? The config should not be configured for service accounts, since the upload script handles that, right? So the only change to the conf should be team/shared drive "yes"?
  6. Agree, big update as always from hasse69. Enjoy your beer, dmacias.
  7. rar2fs has been updated to v1.28.0. Would be nice to get it in Nerd Tools since it has a big list of fixes, thanks! https://github.com/hasse69/rar2fs/blob/master/ChangeLog
  8. Neat workaround! I tried to edit the Docker templates but it kept restoring the network. It would be easy to provide a way to not emit a network flag at all, like a "Network type: manual" option, and then get a textbox to put your own network argument in. Anyway, thanks for the solution!
  9. It worked until I updated the VPN container. To have the containers connect again you need to restart them. Then this happens:

     Pulling image: linuxserver/bazarr:latest
     IMAGE ID [latest]: Pulling from linuxserver/bazarr.
     Status: Image is up to date for linuxserver/bazarr:latest
     TOTAL DATA PULLED: 0 B

     Removing container: bazarr
     Successfully removed container 'bazarr'

     Command: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='bazarr' --net='none' --log-opt max-size='50m' --log-opt max-file='1' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/media/':'/media':'rw,slave' -v '':'/movies':'rw' -v '':'/tv':'rw' -v '/mnt/user/appdata/bazarr':'/config':'rw' --network='container:vpn' 'linuxserver/bazarr'

     Error response from daemon: Container cannot be connected to network endpoints: container:vpn, none
     The command failed.

     root@plex-server:~# docker network ls
     NETWORK ID     NAME                    DRIVER    SCOPE
     08d7b7cb786e   br0                     macvlan   local
     b1378c44dfa6   bridge                  bridge    local
     e9baf8b456d9   host                    host      local
     299acc6193c1   none                    null      local
     5515d3dc02cd   plex-network            bridge    local
     66e6c10f20eb   reverse-proxy-network   bridge    local
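A sketch of one possible fix, assuming the failure is the conflicting pair of network flags (the template's `--net='none'` plus `--network='container:vpn'` from Extra Parameters), which newer Docker releases reject: recreate the container with only the `container:vpn` endpoint. Names and paths are copied from the failing command above, with the empty /movies and /tv mappings trimmed:

```shell
# Recreate bazarr with a single network endpoint instead of the
# conflicting none + container:vpn pair (assumed cause of the error).
/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create \
  --name='bazarr' \
  --log-opt max-size='50m' --log-opt max-file='1' \
  -e TZ="Europe/Berlin" -e HOST_OS="Unraid" \
  -e 'PUID'='99' -e 'PGID'='100' \
  -v '/mnt/user/media/':'/media':'rw,slave' \
  -v '/mnt/user/appdata/bazarr':'/config':'rw' \
  --network='container:vpn' \
  'linuxserver/bazarr'
```

Whether the template UI can be stopped from re-adding `--net='none'` on the next edit is a separate problem, which is what the workaround discussed above tries to address.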
  10. Hi all! I have been able to run a VPN container called 'vpn' and connect all my other containers through it with these settings:

     Extra Parameters: --network='container:vpn'
     Network Type: None

     This has worked since I started to use unRaid 6 months ago or so. Version 6.7.3-rc4 works fine but 6.8.0-rc5 gives an error. Has the Docker version changed? Any ideas how to get this running again?
  11. Did you have time to take a look?
  12. By "locked" I mean showing up as "D" in top and impossible to kill without REISUB or pulling the cable.
  13. I'd rather not have them permanently here. Is there a specific reason you want them attached? It's literally just two clicks away and contains the whole diagnostic zip.
  14. Actually, scratch that, it was up after an unknown time. I hadn't enabled autostart on some dockers which I was querying, so I thought it was still in the boot sequence. I waited about 5 min before leaving it and it seemed stuck. Now that I'm back, I can see that it has been running. Any reboot atm is very snappy and works fine.
  15. Hi! I'm running the latest Nvidia build of Unraid (trial) on my Z370 and it has been running well for a few days. Today I restarted it but it got stuck at starting Samba and isn't up yet an hour later. See the attached photo of the console. Anyone know what could cause this?
  16. I've just downgraded to Nvidia 6.6.7 and have the same (PMS) error: "WARN Failed to find encoder 'h264_qsv'" Do you have to do anything special when using the Nvidia build and passing the iGPU for containers?
  17. Hi all, I can't seem to get Quick Sync to work.

     • I have activated the iGPU in my BIOS
     • I have added modprobe 915 to my go file
     • ls /dev/dri shows: by-path/ card0 card1 renderD128 renderD129
     • I have added --device=/dev/dri to my extra settings
     • I have enabled HW encoding in the Plex settings
     • I have Plex Pass

     Still, HW won't show up in my Plex dashboard. I have no problem getting it to transcode with NVENC from my GTX 1660.

     unRaid Nvidia 6.7.2
     i5-8400, 32GB RAM
     Gigabyte Z370P D3, latest BIOS

     Any ideas?
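For troubleshooting, a sketch that maps each DRM render node to its kernel driver, to tell the iGPU apart from the GTX 1660; `list_drm_drivers` is a hypothetical helper, and the paths follow the standard Linux sysfs layout. Note the Intel module is `i915` (with the leading "i"), so a go-file line reading `modprobe 915` would not load the driver:

```shell
# Map each /dev/dri render node to the driver that owns it.
# SYS_DRM is overridable only so the logic can be exercised
# outside a real /sys tree.
SYS_DRM="${SYS_DRM:-/sys/class/drm}"

list_drm_drivers() {
    for node in "$SYS_DRM"/renderD*; do
        [ -e "$node" ] || continue
        drv=$(basename "$(readlink -f "$node/device/driver")")
        echo "$(basename "$node") -> $drv"
    done
}

# On the box above, after `modprobe i915`, this should list one i915
# node and one nvidia node; pass --device=/dev/dri to the container
# and point Plex at the i915 render node for Quick Sync.
```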