purplechris

Everything posted by purplechris

  1. Hi My "Local" NAS is Unraid, as is my "Remote". After my backup user script runs on Local, offloading its files to Remote, I would like it to trigger a user script on the remote Unraid machine, which will do the following: run Mover, offload files off-site, shut down. I have everything working, although for some reason I cannot run the user script on the remote server from the local server. I am currently trying the following command: ssh root@REMOTEIP 'bash -s < /boot/config/plugins/user.scripts/scripts/test' It connects fine, as this shows in the log on Remote, but the user script on the remote machine is not triggered. The path of the user script is from the User Scripts plugin on the remote server. Once the command has been figured out, I would also like to know how to run it without having to enter a password. Any help would be very much appreciated
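One thing worth trying, sketched locally below as a hedged guess: the User Scripts plugin typically stores each script as a directory that contains a file literally named `script`, so the path in the command may point at a directory rather than at an executable file. The /tmp layout here is a hypothetical stand-in for the flash drive path so the snippet is harmless to run anywhere.

```shell
# Recreate the plugin's usual layout locally (stand-in paths): each
# user script is a folder holding a file named "script".
mkdir -p /tmp/user.scripts/scripts/test
printf '#!/bin/bash\necho triggered\n' > /tmp/user.scripts/scripts/test/script

# Invoking the inner file runs the script; over ssh the equivalent
# would be:
#   ssh root@REMOTEIP 'bash /boot/config/plugins/user.scripts/scripts/test/script'
bash /tmp/user.scripts/scripts/test/script
```

For passwordless login, an SSH key pair is the usual route (`ssh-keygen` on Local, then `ssh-copy-id root@REMOTEIP`), assuming the remote server permits key authentication for root.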
  2. Thanks, I did spot that. I have not loaded the Main tab in 24 hours, and spindown is set to one hour. I've also moved the drive off my LSI HBA direct to SATA and it's still spun up, so I guess something else is keeping the drive online. It's only a backup share that's used once a month and no files are open, so I'm out of ideas
  3. Apologies if this has already been answered, but with this plugin installed my array ZFS disk does not spin down
  4. Similar for me; just updated and the node is stuck at "Bad parameter" when trying to start. I had to remove --runtime=nvidia from my Docker config for the node to start, but then it will not transcode. The error shows as:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: open failed: /proc/sys/kernel/overflowuid: permission denied: unknown.
Resolved by going back to the NVIDIA 510.54 driver; it seems to like my P2000s
  5. Having an issue where the USB is detected but it just sits at "found" and doesn't progress; not sure if this log is enough to address my issue:
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[2021-11-24 18:32:51] frigate.app INFO : Starting Frigate (0.9.4-26ae608)
Starting migrations
[2021-11-24 18:32:51] peewee_migrate INFO : Starting migrations
There is nothing to migrate
[2021-11-24 18:32:51] peewee_migrate INFO : There is nothing to migrate
[2021-11-24 18:32:51] frigate.mqtt INFO : MQTT connected
[2021-11-24 18:32:51] frigate.app INFO : Output process started: 214
[2021-11-24 18:32:51] frigate.app INFO : Camera processor started for back: 216
[2021-11-24 18:32:51] frigate.app INFO : Camera processor started for entrance: 217
[2021-11-24 18:32:51] frigate.app INFO : Camera processor started for side: 218
[2021-11-24 18:32:51] frigate.app INFO : Capture process started for back: 219
[2021-11-24 18:32:51] frigate.app INFO : Capture process started for entrance: 220
[2021-11-24 18:32:51] frigate.app INFO : Capture process started for side: 221
[2021-11-24 18:32:51] ws4py INFO : Using epoll
[2021-11-24 18:32:51] ws4py INFO : Using epoll
[2021-11-24 18:32:51] detector.coral INFO : Starting detection process: 213
[2021-11-24 18:32:51] frigate.edgetpu INFO : Attempting to load TPU as usb
[2021-11-24 18:32:54] frigate.edgetpu INFO : TPU found
  6. Issue with remote nodes, if someone could please help? Main Unraid server working great with a local node and local GPU. I have another Unraid server I am using as a remote node; everything connects and the transcode takes place, but it cannot move the file. All the paths are the same for media and transcode cache, and I can move files manually via the server, but the remote Tdarr node cannot.
  7. Hey All Got myself a DS2246 for my slave Unraid server, with a NetApp 111-00341+F2 4-port HBA card. Unraid shows the card in system devices as [11f8:8001] 05:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. Device 8001 (rev 05). I have connected port 1 of the HBA to the left port of the first IOM6 controller and port 3 to the first port on the right-hand 2nd IOM6 controller. No drives show in Unraid. No experience with these shelves or HBAs; any help would be much appreciated. Unraid log from startup:
Feb 3 19:57:11 Slave emhttpd: shcmd (88): umount /mnt/cache
Feb 3 19:57:11 Slave emhttpd: shcmd (89): rmdir /mnt/cache
Feb 3 19:57:11 Slave root: Stopping diskload
Feb 3 19:57:11 Slave kernel: mdcmd (37): stop
Feb 3 19:57:11 Slave kernel: md1: stopping
Feb 3 19:57:11 Slave emhttpd: shcmd (91): rm -f /boot/config/forcesync
Feb 3 19:57:11 Slave emhttpd: shcmd (92): sync
Feb 3 19:57:11 Slave emhttpd: Starting services...
Feb 3 19:57:11 Slave emhttpd: shcmd (94): /etc/rc.d/rc.samba start
Feb 3 19:57:11 Slave root: Starting Samba: /usr/sbin/smbd -D
Feb 3 19:57:11 Slave root: /usr/sbin/nmbd -D
Feb 3 19:57:11 Slave smbd[11264]: [2021/02/03 19:57:11.843226, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Feb 3 19:57:11 Slave smbd[11264]: daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
Feb 3 19:57:11 Slave root: /usr/sbin/wsdd
Feb 3 19:57:11 Slave nmbd[11269]: [2021/02/03 19:57:11.854199, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Feb 3 19:57:11 Slave nmbd[11269]: daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
Feb 3 19:57:11 Slave root: /usr/sbin/winbindd -D
Feb 3 19:57:11 Slave winbindd[11279]: [2021/02/03 19:57:11.894502, 0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
Feb 3 19:57:11 Slave winbindd[11279]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
Feb 3 19:57:11 Slave winbindd[11279]: [2021/02/03 19:57:11.895051, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Feb 3 19:57:11 Slave winbindd[11279]: daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
Feb 3 19:57:11 Slave emhttpd: shcmd (95): cp /tmp/emhttp/smb.service /etc/avahi/services/smb.service
Feb 3 19:57:11 Slave avahi-daemon[4933]: Files changed, reloading.
Feb 3 19:57:11 Slave avahi-daemon[4933]: Loading service file /services/smb.service.
Feb 3 19:57:11 Slave emhttpd: no mountpoint along path: /mnt/user/system/docker
Feb 3 19:57:12 Slave avahi-daemon[4933]: Service "Slave" (/services/smb.service) successfully established.
Feb 3 20:00:01 Slave crond[1712]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
Feb 3 20:06:21 Slave kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
Feb 3 20:06:21 Slave kernel: caller _nv000709rm+0x1af/0x200 [nvidia] mapping multiple BARs
Feb 3 20:06:22 Slave kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
Feb 3 20:06:22 Slave kernel: caller _nv000709rm+0x1af/0x200 [nvidia] mapping multiple BARs
Feb 3 20:06:23 Slave kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
Feb 3 20:06:23 Slave kernel: caller _nv000709rm+0x1af/0x200 [nvidia] mapping multiple BARs
Feb 3 20:06:24 Slave kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
Feb 3 20:06:24 Slave kernel: caller _nv000709rm+0x1af/0x200 [nvidia] mapping multiple BARs
Feb 3 20:08:21 Slave ntpd[1691]: receive: Unexpected origin timestamp 0xe3c58135.37b28728 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe3c58135.3b6264f3
Feb 3 20:09:24 Slave ntpd[1691]: receive: Unexpected origin timestamp 0xe3c58174.37b2c826 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe3c58174.3b9fe1fd
Feb 3 20:26:13 Slave ntpd[1691]: receive: Unexpected origin timestamp 0xe3c58565.533131a4 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe3c58565.585b237e
  8. Hey All Having an issue with rclone, using Unraid Nvidia 6.8.3 and rclone 1.53.3. I have a simple user script which runs: rclone sync -v /mnt/user/Media/ /mnt/disks/offload This syncs my media to a Synology backup box, which is mounted with Unassigned Devices to the offload folder. The error is pretty obvious: failed to copy: write: no space left on device. However, the device has 29TB free. Any thoughts would be much appreciated. This had been working great for a few years until this month.
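One hedged diagnostic worth running first: "no space left on device" with plenty of blocks free can also mean the destination filesystem has run out of inodes, which df reports separately. The /tmp path below is only a stand-in so the snippet runs anywhere; the real mount in the post is /mnt/disks/offload.

```shell
# Compare free blocks vs free inodes on the destination before syncing.
# /tmp is a stand-in; the real mount in the post is /mnt/disks/offload.
DEST=/tmp
df -h "$DEST" | tail -1   # free space (blocks)
df -i "$DEST" | tail -1   # free inodes -- exhaustion here also raises ENOSPC
# rclone sync -v /mnt/user/Media/ "$DEST"   # the sync itself, unchanged
```

If IFree on the destination is at or near zero, the error message matches even with 29TB of free space.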
  9. Hey All A strange one, at least for me. My Google Wifi devices run on 192.168.86.x; my main network is 192.168.1.x. Unraid, on the main network, cannot access the 192.168.86.x part of my network. Any browser on the main network has no issue, so my guess is I need to add something to the Unraid routing table, as currently I cannot access 192.168.86.x from either a Docker container or a VM. Any help would be awesome.
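One hedged sketch, assuming the Google Wifi unit has a LAN-side address on the main network (the 192.168.1.50 below is purely hypothetical): a static route on Unraid pointing the 192.168.86.0/24 subnet at that address. This is a root-only configuration command and the right gateway address depends on the setup, so treat it as a template rather than something to paste verbatim.

```shell
# Hypothetical gateway -- replace 192.168.1.50 with the Google Wifi
# node's IP on the 192.168.1.x network, and br0 with the Unraid bridge
# interface if it differs.
ip route add 192.168.86.0/24 via 192.168.1.50 dev br0
```

Note that even with the route in place, Google Wifi's own firewall/NAT may still drop unsolicited traffic arriving from the 192.168.1.x side.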
  10. Finally have this working, thanks all; took me a while. It makes me wonder if it's possible to add an additional list of domains, perhaps with shorter custom cache times; that could be useful in some situations.
  11. Hey Buddy, nslookup gives me my router IP of 192.168.1.1. Its DNS is set to primary 192.168.1.69, which is lancache, and 192.168.1.3, which is Pi-hole. Both google and steamcontent.com show my router IP on nslookup too
  12. Thanks James, I have the outbound working now with the secondary DNS. Nothing in the cache still, and sure, here you go
  13. Hey buddy, I did, loads of times. I can't even download the Epic Games launcher; it's like all requests are looping
  14. Hi All Hope everyone is well. Finally took the time to get the lancache Docker running. I can see all the DNS calls in the log for Steam, Epic, my web traffic etc. With the way I have it set up there are no files in the cache folder; I've tried various games with a few providers, and I see the data in the log for the cache URLs it's accessing, such as:
29-Apr-2020 20:33:27.494 client @0x15358023cda0 192.168.1.1#52049 (steamcdn-a.akamaihd.net): query: steamcdn-a.akamaihd.net IN A + (192.168.1.69)
29-Apr-2020 20:20:05.233 client @0x1533e024b530 192.168.1.1#52049 (cache26-lhr1.steamcontent.com): query: cache26-lhr1.steamcontent.com IN A + (192.168.1.69)
29-Apr-2020 20:20:04.834 client @0x1533e02116f0 192.168.1.1#52049 (lancache.steamcontent.com): query: lancache.steamcontent.com IN A + (192.168.1.69)
But no files. The only things I see in the docker log that look like errors are (Epic now says trouble connecting, not sure if that's related):
proxy_next_upstream error timeout http_404;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
There is also something odd when I search for the above domain in the docker log:
~.*£££.*?# WARNING: Origin has been seen downloading https client downloads on origin-a.akamaihd.net. A solution should be in place to forward https to the origin server (eg sniproxy) origin;
~.*£££.*?origin-a.akamaihd.net origin;
I am out of ideas to be honest.
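A hedged way to check whether clients are actually being steered at the cache (the 192.168.1.69 address and domains come from the log above): if these lookups return public CDN addresses instead of the lancache host, downloads bypass the cache entirely and nothing will ever land in the cache folder. These commands need the poster's live network, so they are illustrative rather than verifiable here.

```shell
# From a LAN client, each of these should resolve to the lancache host
# (192.168.1.69 in the post) when queried through the LAN's DNS; a
# public CDN address instead means traffic is bypassing the cache.
nslookup lancache.steamcontent.com
nslookup steamcdn-a.akamaihd.net
```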
  15. Thanks for the hard work guys 😀 Long-time Unraid user, Plex Docker with a P2000; followed the guide, works like a charm. Thanks again.
  16. Great to know, thanks buddy. My issue now is that it's not where it would normally be
  17. Hi Everyone I need to get to the /etc/modprobe.d/ directory and create a file called, let's say, card, and in that file I need modprobe saa7134 card=2,2,2,2,2,2,2,2 for my CCTV capture card to function. Any ideas? I've never tried to access or SSH into a Docker container.
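For reference, a hedged sketch of what a modprobe.d entry usually looks like: files in that directory take `options <module> <params>` lines rather than a `modprobe` command, and module options apply to the host kernel, not inside a Docker container. The /tmp path below is only a stand-in so the snippet is harmless to run; the real target would be something like /etc/modprobe.d/saa7134.conf on the host.

```shell
# Write the module options in modprobe.d syntax ("options", not a
# modprobe command). /tmp is a stand-in; the real path would be
# /etc/modprobe.d/saa7134.conf on the host.
CONF=/tmp/saa7134.conf
echo 'options saa7134 card=2,2,2,2,2,2,2,2' > "$CONF"
cat "$CONF"
```

After placing the real file, the options take effect the next time the saa7134 module is loaded (e.g. after a reboot or a modprobe -r/modprobe cycle).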
  18. Well, AMD card in slot 2 and still the same issues. One thing I noticed: on just Windows the card won't install either, while other cards will. Off to the shop tomorrow to grab something new.
  19. Hiya, I actually haven't, no; it is something to think about for sure. I have the AMD card coming in an hour from Amazon, so that will be my first test I think. As you say, it could be the 1050 or the main slot itself. It's a workstation board, so it does seem pretty good in most aspects.
  20. Hey, yeah, using OVMF and SeaBIOS is the only way any of it works.
  21. Actually it's quite weird: I installed 2 cards, both Nvidia. It works fine on SeaBIOS and uses the card in the main slot as I like, and I have set the other to be the main in the BIOS, but as soon as I install drivers, black screen. Actually the same in Windows without Unraid and 2 cards. I am going to try an AMD card tomorrow in the 2nd slot, then Unraid; and if not, one card and back to Windows for me, shame indeed.
  22. It is, yes, and no displays are detected other than the main one.
  23. Hi Guys, hope I can get some help; building a new machine for work, based on a Z620 workstation. All is running well apart from the GPU passthrough, well, sort of. The DisplayPort on the 1050 Ti works great; the other ports don't work at all. There is no onboard video, so I've had to put my 1050 Ti in the main slot for my passthrough, then a cheap 750 in the 2nd slot for Unraid (the case will not allow the 1050 in the bottom slot; there is no space for it). My BIOS supports primary and secondary GPU, so this is set accordingly; the VM boots and the DisplayPort on the 1050 Ti outputs perfectly fine. But the 2 DVI ports on the card and the HDMI do not. How do I fix this, any ideas? I've never done GPU passthrough but followed the spaceinvader video people seem to use, which got me to this stage.
  24. I have set up my VPN with TunnelBear and it works great; however, Plex. From everything I've read and spent weeks trying, this is what goes in the config file:
# PLEX over WAN
route plex.tv 255.255.255.255 net_gateway
route my.plexapp.com 255.255.255.255 net_gateway
route myplex.tv 255.255.255.255 net_gateway
Now this does exclude Plex, well, in a way: when I go into Plex I see my actual IP at least, and my ports are forwarded as they have always been, but there is still no outside connection for Plex until I turn the VPN off. My network settings while connected look as below, and the connection log from OpenVPN is also below. As for Plex logs, I have no idea which one; way too many and none with the right timestamp. I am just stumped.
default via 172.18.12.9 dev tun5
34.248.236.84 via 192.168.1.1 dev br0
34.252.129.181 via 192.168.1.1 dev br0
34.252.160.54 via 192.168.1.1 dev br0
34.253.32.64 via 192.168.1.1 dev br0
52.17.222.85 via 192.168.1.1 dev br0
52.212.88.40 via 192.168.1.1 dev br0
54.77.197.74 via 192.168.1.1 dev br0
54.171.208.164 via 192.168.1.1 dev br0
159.89.101.187 via 192.168.1.1 dev br0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.12.1 via 172.18.12.9 dev tun5
172.18.12.9 dev tun5 proto kernel scope link src 172.18.12.10
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.9 metric 213
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
Protocol Route Gateway Metric
IPv4 default 172.18.12.9 1
IPv4 34.248.236.84 192.168.1.1 1
IPv4 34.252.129.181 192.168.1.1 1
IPv4 34.252.160.54 192.168.1.1 1
IPv4 34.253.32.64 192.168.1.1 1
IPv4 52.17.222.85 192.168.1.1 1
IPv4 52.212.88.40 192.168.1.1 1
IPv4 54.77.197.74 192.168.1.1 1
IPv4 54.171.208.164 192.168.1.1 1
IPv4 159.89.101.187 192.168.1.1 1
IPv4 172.17.0.0/16 docker0 1
IPv4 172.18.12.1 172.18.12.9 1
IPv4 172.18.12.9 tun5 1
IPv4 192.168.1.0/24 br0 213
IPv4 192.168.122.0/24 virbr0 1
IPv6 2000::/3 tun5 1024
IPv6 fde4:8dba:82e2::/64 tun5 256