Everything posted by saarg

  1. What are the permissions on the script, and where is it located?
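     As a quick check you can run the commands below (the path is just a placeholder, use wherever your script actually lives):
         ls -l /path/to/your/script.sh
         chmod +x /path/to/your/script.sh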
  2. Read the readme on GitHub for how to set a specific version. You need to find the version string in the Plex Pass section of the Plex forum.
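     For reference, on the linuxserver.io Plex container this is done through the VERSION environment variable, roughly like this (replace the placeholder with the actual version string from the forum post):
         -e VERSION=<version string from the Plex forum>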
  3. Since you can't use UD+ without UD, wouldn't it be better to install UD as well when you install UD+? Then there wouldn't be any need to spend time telling people that they also need UD.
  4. Check the first posts in the Unraid Nvidia plugin thread for how to see if it's working. If Jellyfin isn't specifically mentioned, look at the Emby info.
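     The usual quick check (assuming the Nvidia build is installed and the GPU is passed to the container) is to start a transcode and watch the GPU from the host:
         watch nvidia-smi
     A transcoding process should show up in the process list if the GPU is actually being used.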
  5. You are running the container in host mode, I guess? If you run it in bridge mode and need to use port 554 on the outside, you can just map port 9983 to 554. If you run in host mode, we can try to allow Tvheadend to use a port lower than 1024 with a command. Run the commands below and see if that works. Restart the container after you run them.

     docker exec -it tvheadend bash
     apk --update add libcap
     setcap cap_net_bind_service=+epi /usr/bin/tvheadend
     exit
  6. Are you saying you are installing libvirt yourself? That is for sure going to mess things up. 6.8.0 should have a pretty recent libvirt version installed.
  7. You add that in the extra domain field. If it isn't in the template, then add a variable to the template, checking the correct name in the GitHub link in the first post.
  8. I don't have it bookmarked, so no link, but it's somewhere in those threads, posted by aptalca. Nginx is black arts for me, but nginx can handle multiple things on port 443 at the same time. Try googling "reverse proxy openvpn" and the first hit should be a guy that did it.
  9. You can use nginx to proxy the vpn connection. There should be instructions in this thread or the letsencrypt thread for how to do it. Should be posts from @aptalca
  10. It should work as with every other container. Looking at your screenshot, you have changed the PUID and PGID from the standard 99/100. Any reason for that? Also, posting the container log might help to figure it out.
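      For reference, the defaults for the linuxserver.io containers on Unraid are usually set like this in the template:
          -e PUID=99
          -e PGID=100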
  11. If you run the official Plex container, please use their support forum. Or if you think this is an Unraid issue, open a topic in the general support subforum.
  12. This is support for Unraid users. Go to our Discourse forum or Discord for support on other platforms.
  13. I recently exchanged one of my cache disks, going from a 500GB Samsung EVO 850 to a 1TB NVMe Samsung EVO 860. The first issue I noticed was that none of the docker containers worked, as it seems the docker.img file became read-only during the initial replace (you probably understand better what happened from the log). Should the mounting of docker.img maybe wait until the device replacement is finished? Anyway, that is not the main issue I wanted to report, and it was fixed by stopping and starting the docker service. I followed the FAQ entry here:

      After I replaced the drive I still got drive-full messages when copying to a share with cache enabled. When I checked the Unraid GUI, the size of the cache pool was still showing 750 GB even though I now had two 1TB disks. After a quick check, I saw that the filesystem on the new 1TB drive was the same size as the old 500GB drive. After running btrfs filesystem resize 1:max /mnt/cache the issue was resolved.

      I can see from the syslog that the resize command is run, but it looks like it's using the old device /dev/sdc1, and after the resize the device is changed to /dev/nvme0n1p1. I know this is 6.7.2 and this might have changed in 6.8, but I'll post it in case it hasn't.

      Below is the output of some btrfs commands before and after the resize command.

      root@Server1:~# btrfs fi usage -T /mnt/cache
      Overall:
          Device size:                   1.41TiB
          Device allocated:            931.46GiB
          Device unallocated:          512.39GiB
          Device missing:                  0.00B
          Used:                        624.15GiB
          Free (estimated):            408.50GiB      (min: 408.50GiB)
          Data ratio:                       2.00
          Metadata ratio:                   2.00
          Global reserve:              130.98MiB      (used: 0.00B)

                         Data      Metadata  System
      Id Path            RAID1     RAID1     RAID1    Unallocated
      -- -------------- --------- --------- -------- -----------
       1 /dev/nvme0n1p1 463.70GiB   2.00GiB 32.00MiB   465.78GiB
       2 /dev/sdb1      463.70GiB   2.00GiB 32.00MiB   512.36GiB
      -- -------------- --------- --------- -------- -----------
         Total          463.70GiB   2.00GiB 32.00MiB   978.14GiB
         Used           311.39GiB 699.41MiB 96.00KiB

      root@Server1:~# btrfs filesystem show /mnt/cache/
      Label: none  uuid: ca04e038-59fa-4d03-a900-74b50150a144
              Total devices 2 FS bytes used 312.07GiB
              devid    1 size 465.76GiB used 465.73GiB path /dev/nvme0n1p1
              devid    2 size 978.09GiB used 465.73GiB path /dev/sdb1

      root@Server1:~# btrfs filesystem resize 1:max /mnt/cache
      Resize '/mnt/cache' of '1:max'

      root@Server1:~# btrfs filesystem show /mnt/cache/
      Label: none  uuid: ca04e038-59fa-4d03-a900-74b50150a144
              Total devices 2 FS bytes used 312.28GiB
              devid    1 size 931.51GiB used 465.73GiB path /dev/nvme0n1p1
              devid    2 size 978.09GiB used 465.73GiB path /dev/sdb1

      The command below was run today, as I forgot to run it after the resize.

      root@Server1:~# btrfs fi usage -T /mnt/cache
      Overall:
          Device size:                   1.86TiB
          Device allocated:            959.46GiB
          Device unallocated:          950.14GiB
          Device missing:                  0.00B
          Used:                        641.83GiB
          Free (estimated):            632.56GiB      (min: 632.56GiB)
          Data ratio:                       2.00
          Metadata ratio:                   2.00
          Global reserve:              140.84MiB      (used: 0.00B)

                         Data      Metadata  System
      Id Path            RAID1     RAID1     RAID1     Unallocated
      -- -------------- --------- --------- --------- -----------
       1 /dev/nvme0n1p1 477.70GiB   2.00GiB  32.00MiB   451.78GiB
       2 /dev/sdb1      477.70GiB   2.00GiB  32.00MiB   498.36GiB
      -- -------------- --------- --------- --------- -----------
         Total          477.70GiB   2.00GiB  32.00MiB   950.14GiB
         Used           320.21GiB 725.50MiB 112.00KiB

      server1-diagnostics-20191221-2036.zip
  14. Google is your friend. https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/
  15. That is normal if you set the version variable to latest. But why did you set your PUID to 999? If you use Unraid, that should be 99.
  16. I did not see that yesterday. Probably blind. I don't know what we are going to do about it yet, but it seems strange he didn't update the bridge if that is the new version.
  17. It says where in the Reddit thread: Linux: /usr/lib/plexmediaserver/Plex Relay
  18. Where does the dev say that Booksonic Bridge is the way forward? There hasn't been a commit on GitHub for 2 years, so I find that strange.
  19. It will not, as you don't have the correct name or path to Plex Relay, judging by the Reddit link.
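      Note that the file name has a space in it, so quote the path in anything you point at it. For example, just to illustrate the quoting (not necessarily the exact command the Reddit thread uses):
          ls -l "/usr/lib/plexmediaserver/Plex Relay"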
  20. The SSL port is 1443, unless you have changed it. domoticz.sh isn't used in the container.
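      If the container runs in bridge mode, the mapping would look roughly like this (assuming you haven't changed the port):
          -p 1443:1443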
  21. Most likely different kernel/qemu/libvirt versions.