
Fiservedpi

Members
  • Posts: 250
  • Joined
  • Last visited

Everything posted by Fiservedpi

  1. So there won't be an RC6 or RC7 release? I'm compulsively checking 20 times a day; should I stop checking? Is there any way to get notified of new LSIO Nvidia builds?
  2. So I have a network-attached WD My Cloud that I would like to decommission. I'd like to move all the data (3TB) to my array; what would be the best way to do this? The My Cloud has USB 3.0 and my server has USB 3.0 as well. Just looking for ideas on how to do this as painlessly as possible.
  3. I'm using a GTX 960 and the fans rarely spin up during transcoding, even with at least 5 streams.
  4. Same here on RC5: 10 Dockers, 4 VMs, br0 for all. Ever since RC5 I've noticed one of my br0 Dockers would "stall out" and become unreachable; I'm assuming this issue is why. The error does not appear until a VM is started. tower-diagnostics-20191108-1356.zip
  5. After installing the container and trying to stop the array, I'm now greeted with the server being unable to spin down disk 1. Anyone know how I can resolve this?

     ```
     Oct 28 18:19:20 Tower root: umount: /mnt/disk1: target is busy.
     Oct 28 18:19:20 Tower emhttpd: shcmd (683): exit status: 32
     Oct 28 18:19:20 Tower emhttpd: Retry unmounting disk share(s)...
     Oct 28 18:19:25 Tower emhttpd: Unmounting disks...
     Oct 28 18:19:25 Tower emhttpd: shcmd (684): umount /mnt/disk1
     Oct 28 18:19:25 Tower root: umount: /mnt/disk1: target is busy.
     Oct 28 18:19:25 Tower emhttpd: shcmd (684): exit status: 32
     Oct 28 18:19:25 Tower emhttpd: Retry unmounting disk share
     ```
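When umount reports `target is busy`, some process still has a file open on that disk. A minimal sketch to spot the culprit, walking /proc directly so it works even where lsof/fuser aren't installed; /mnt/disk1 is taken from the log above:

```shell
# List PIDs holding files open under a mount point by walking /proc.
# (Same information `lsof +D <mnt>` or `fuser -vm <mnt>` would give.)
find_holders() {
    mnt=$1
    for fd in /proc/[0-9]*/fd/*; do
        tgt=$(readlink "$fd" 2>/dev/null) || continue
        case "$tgt" in
            "$mnt"/*|"$mnt")
                pid=${fd#/proc/}
                echo "${pid%%/fd/*} $tgt"
                ;;
        esac
    done
}

find_holders /mnt/disk1    # prints "PID open-path" for each offender
```

Once you know the PID, you can decide whether to stop the container/service cleanly rather than killing it; in this case the newly installed container is the obvious suspect.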
  6. Ok, thanks @Squid. That's what I thought; my setup is currently 100% reliable, so I'll just leave it as is and toss the NIC in a pfSense scraps box I'm saving.
  7. I recently picked up an Intel Pro/1000 PT dual NIC; would I benefit from using this over the built-in NIC on my mobo (Asus 310 Prime A)?
  8. I believe I got it; there was an issue with my .iso.
  9. Switching to SATA cleared up a few errors; now just this one.
  10. After creating a new VM and clicking VNC Remote, I'm presented with this screen. Can anyone assist? What is it telling me, and what should I do?
  11. After upgrading my GPU I'm now getting the error in the attached picture. VM log attached as well; everything was running before the upgrade and for about 10 minutes after, but it has since stopped working.

     ```
     -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none \
     -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on \
     -drive file=/etc/libvirt/qemu/nvram/347d621d-20a1-54f8-c6ec-4bcc2f308186_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 \
     -m 4608 \
     -realtime mlock=off \
     -smp 3,sockets=1,cores=3,threads=1 \
     -uuid 347d621d-20a1-54f8-c6ec-4bcc2f308186 \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=24,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
     -drive 'file=/mnt/user/domains/Windows 10.3/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
     -drive file=/mnt/user/isos/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on \
     -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 \
     -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:30:69:2f,bus=pci.0,addr=0x3 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=29,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -vnc 0.0.0.0:0,websocket=5700 \
     -k en-us \
     -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
     -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x6 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2019-09-14 14:16:36.253+0000: Domain id=2 is tainted: high-privileges
     2019-09-14 14:16:36.253+0000: Domain id=2 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial
     ```
  12. So I've got the plugin up and running within my network, but it doesn't work outside my network. I've forwarded the correct port but am lost on how to set this up so I can connect when I'm not on my network.
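One option, echoing the "putty tunnel" suggestion later in this thread: instead of exposing the plugin's UI port directly, forward only SSH on the router and tunnel the UI through it. A sketch that just prints the command to run from the remote machine — the hostname and port are placeholders, not from the original post:

```shell
# Hypothetical WAN hostname and plugin web-UI port; substitute your own.
HOST=example.dyndns.org
PORT=8080

# -N: no remote command, just forwarding.
# -L: map local $PORT to $PORT on the server's loopback interface.
echo "ssh -N -L ${PORT}:localhost:${PORT} root@${HOST}"
```

With the tunnel up, the plugin is reachable at http://localhost:8080 on the remote machine while only the SSH port is open to the Internet.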
  13. And swing and a miss number 2: traded up from a GTX 760 to a GTX 960, but same issue, "no CUDA enabled devices found", as well as "rm_init_adapter failed". Does this mean these cards can't do transcoding? I've passed the card through to the VM successfully (VM off during testing, though).
  14. All of a sudden the plug-in is not actually updating the Dockers. All my settings are the same, but I have to hit the "apply update" button manually.
  15. meehh slap a putty tunnel on that sum b%#$ch call it a dayy
  16. GTX 760, right. I like to be on the most current build; guess my card just isn't up to snuff.
  17. Is there any way to roll back the driver version? I'm getting an error that my device is not compatible with this driver version. Do I have to roll back the whole Unraid Nvidia build? I should note the GPU appears to be working within my VM, but I am unable to pass it through to any Dockers ("No CUDA enabled devices available").

     ```
     Using built-in stream user interface
     -> Detected 24 CPUs online; setting concurrency level to 24.
     WARNING: You do not appear to have an NVIDIA GPU supported by the 430.14 NVIDIA Linux graphics
              driver installed in this system. For further details, please see the appendix SUPPORTED
              NVIDIA GRAPHICS CHIPS in the README available on the Linux driver download page at
              www.nvidia.com.
     -> Not installing a kernel module; skipping the "is an NVIDIA kernel module loaded?" test.
     -> Installing NVIDIA driver version 430.14.
     -> Skipping check for conflicting rpms.
     WARNING: The nvidia-uvm module will not be installed. As a result, CUDA will not function with
              this installation of the NVIDIA driver.
     WARNING: The nvidia-drm module will not be installed. As a result, DRM-KMS will not function
              with this installation of the NVIDIA driver.
     WARNING: You specified the '--no-kernel-module' command line option, nvidia-installer will not
              install a kernel module as part of this driver installation, and it will not remove
              existing NVIDIA kernel modules not part of an earlier NVIDIA driver installation.
              Please ensure that an NVIDIA kernel module matching this driver version is installed
              separately.
     ```
  18. Access to the Docker; the media lives on an unmounted NAS.
  19. Ok, thanks @Squid. It occurred to me that this appears during a parity check. I'm going to be adding a new 3.5" HDD today, then run a parity check to see if it happens again.
  20. I'm experiencing this too. Have you figured out if this is normal or something nefarious? Why would nginx be doing pub/sub messaging on /disks?
  21. "One question: Have you exposed this server to the Internet in any fashion?" = yes, via port forwarding for Plex and SAB. The majority (85%) of error.log.1 is this, repeated. Looking into the nchan portion, it seems to be some type of pub/sub client that I have not installed; I think there is something nefarious happening.

     ```
     Jul 10 03:35:45 Tower nginx: 2019/07/10 03:35:45 [error] 7886#7886: MEMSTORE:01: can't create shared message for channel /disks
     Jul 10 03:35:46 Tower nginx: 2019/07/10 03:35:46 [crit] 7886#7886: ngx_slab_alloc() failed: no memory
     Jul 10 03:35:46 Tower nginx: 2019/07/10 03:35:46 [error] 7886#7886: shpool alloc failed
     Jul 10 03:35:46 Tower nginx: 2019/07/10 03:35:46 [error] 7886#7886: nchan: Out of shared memory while allocating message of size 9960. Increase nchan_max_reserved_memory.
     Jul 10 03:35:46 Tower nginx: 2019/07/10 03:35:46 [error] 7886#7886: *2021857 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=2 HTTP/1.1", host: "localhost"
     Jul 10 03:35:46 Tower nginx: 2019/07/10 03:35:46 [error] 7886#7886: MEMSTORE:01: can't create shared message for channel /disks
     ```
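For context: the nchan traffic itself is not an outside client. The Unraid webGUI uses nginx's nchan module as its own pub/sub channel to push disk status to the browser (hence the internal `POST /pub/disks` from a unix socket), and the log names the immediate fix: the shared memory pool is exhausted. A sketch of the relevant directive, hedged because the value is illustrative and Unraid regenerates its nginx config at boot, so a manual edit would not persist:

```nginx
# inside the http {} block of the nginx config (illustrative value)
nchan_max_reserved_memory 64M;
```

Whether something nefarious caused the memory pressure in the first place is a separate question, but the /pub/disks messages themselves are the GUI's own plumbing.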