Leaderboard

Popular Content

Showing content with the highest reputation on 03/10/19 in all areas

  1. We're still working on this and the Unraid updates. Everything compiles fine, with drivers and kernel modules on the host, but when starting containers the changes just aren't being carried through. On the plus side, when we do figure it out, we'll have a much better understanding of how everything works under the hood, I guess... Sent from my Mi A1 using Tapatalk
    3 points
  2. Let's start with my setup:

Motherboard: ASUS X470-F Gaming (Socket AM4)
CPU: AMD Ryzen 7 2700X (does not have integrated graphics)
Memory: 32GB
Storage: 2 x 512GB NVMe SSDs (no RAID)
GPU0 (top PCIe slot): EVGA GTX 1050
GPU1 (secondary PCIe slot): EVGA GTX 1080 Ti Hybrid
Network: single onboard gigabit NIC

Everyone's got something different, but the above is working for me just fine. Here are the hurdles I faced during this journey:

* My Native Instruments Komplete Audio 6 would panic and reset itself in a Windows 10 guest.
- To fix this, I had to look at my IOMMU groups and find a USB controller that doesn't share an IOMMU group with anything else important, then "stub" it to keep the underlying Unraid OS from claiming it during boot.
* As a result of the above, I became limited in which USB ports I could use for VMs.
- But, to be fair, I can now unplug/replug anything I want on the Windows 10 guest. That's the benefit of binding an entire USB controller.
* I found I was completely unable to pass through my primary graphics card to a Linux guest.
- To fix this, I had to stub the card, AND (this is the most important bit) I had to disable the EFI framebuffer. These bits are all done on the Unraid kernel startup line.
* My Linux guest had stuttery, demonic audio over HDMI, and video would lag when audio was playing.
- To fix this, I had to enable an Intel sound option inside the Linux guest, which I found amusing since nothing in my machine is Intel (the virtualization layer needs it).

Let's dig into each.

USB audio interface runs terribly when passed through

This happens because of the emulation layer. The only fix is to find a USB controller that can be stubbed and pass the whole controller through.
To figure out which one, open a terminal on Unraid and run:

lspci | grep USB

Take note of the IDs there, then run:

for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do echo "IOMMU group $(basename "$iommu_group")"; for device in $(\ls -1 "$iommu_group"/devices/); do if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then echo -n "[RESET]"; fi; echo -n $'\t'; lspci -nns "$device"; done; done

In the big list this prints, you're looking for something like this:

IOMMU group 20
	0b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller [1022:145f]

Notice how this USB controller is the ONLY device in its IOMMU group (which just so happens to be group 20 on my machine). This means we should be able to completely isolate it from the Unraid host OS and let the VM claim it completely and without question.

To stub it, we modify the kernel startup line. You can either use a terminal and edit /boot/syslinux/syslinux.cfg, or use the Unraid web UI: go to Main -> click "Flash" under 'Boot Device' -> then edit the 'Unraid OS' section under 'Syslinux Configuration'. Here's what it looks like when I stub ONLY the above USB controller:

kernel /bzimage
append initrd=/bzroot vfio-pci.ids=1022:145f

Save that change, then reboot Unraid. You can now edit your VM and, at the bottom, uncheck the individual USB devices and instead pass through the USB controller itself. That should fix it.

I can't pass through my GPU in the top slot because Unraid is using it

Correct, but we can be aggressive about it and do it anyway.
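The IOMMU one-liner above can also be written as a readable script. This is a sketch: the function name is mine, and the group directory is parameterized (it defaults to the standard /sys/kernel/iommu_groups location) purely so the logic can be exercised outside a live Unraid box.

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices inside it, marking
# devices that support function-level reset with [RESET].
# list_iommu_groups is a hypothetical helper name; the default
# directory is the standard sysfs location used in the one-liner.
list_iommu_groups() {
    local dir="${1:-/sys/kernel/iommu_groups}"
    local group device
    for group in "$dir"/*; do
        [ -d "$group" ] || continue
        echo "IOMMU group $(basename "$group")"
        for device in "$group"/devices/*; do
            [ -e "$device" ] || continue
            # A 'reset' node means the device supports FLR
            if [ -e "$device/reset" ]; then
                printf '[RESET]'
            fi
            printf '\t'
            lspci -nns "$(basename "$device")"
        done
    done
}
```

A controller that sits alone in its group, like the AMD USB 3.0 controller above, is the one worth stubbing.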
Start by finding the device IDs using lspci:

root@funraid:~# lspci | grep VGA -A1
09:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 3GB] (rev a1)
09:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
0a:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
0a:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)

Above, you can see that my GTX 1050 and its associated audio device are at 09:00.(0|1). So now I need to find their vendor:device IDs:

root@funraid:~# lspci -n | grep 09:00.
09:00.0 0300: 10de:1c83 (rev a1)
09:00.1 0403: 10de:0fb9 (rev a1)

Awesome. Let's stub those, too... but remember! We also need to disable the EFI framebuffer. This means that when you boot Unraid next, you WILL NOT have any console output after the main bootloader (the blue screen) completes; the GPU you're passing through (in this case, my GTX 1050) will appear to freeze at that bootloader. If you see the OS even remotely start to boot, you didn't do this right.

In addition to the USB controller, we'll also stub the GPU and its audio device, and disable the EFI framebuffer:

kernel /bzimage
append initrd=/bzroot vfio-pci.ids=1022:145f,10de:1c83,10de:0fb9 video=efifb:off

Save that and reboot Unraid, and all will be well. You might (and this is very unlikely) need to supply a VGA BIOS from TechPowerUp for your card, or, worst case, put the card into PCIe slot 2 and dump its BIOS from the Unraid OS, but I really don't think you'll have to.

My Linux guest has demonic, stuttery audio and video when playing media with audio over HDMI through my Nvidia card

Me too. I dug for a bit and found out that I can simply give the Linux guest a modprobe option to clear up this behavior. Inside the Linux guest's OS (NOT UNRAID!), create a conf file in /etc/modprobe.d/...
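The two lspci steps above (find the slot, then grab its vendor:device IDs) can be collapsed into one small helper. A sketch, assuming the standard three-column `lspci -n` output; the function name is mine:

```shell
# Print the vendor:device IDs for every function of a given PCI slot
# prefix (e.g. "09:00"), comma-joined and ready to paste into
# vfio-pci.ids=... on the syslinux append line.
# pci_ids_for_slot is a hypothetical helper name.
pci_ids_for_slot() {
    local slot="$1"
    # Column 3 of `lspci -n` is the vendor:device ID; keep only lines
    # whose slot field starts with the requested prefix.
    lspci -n | awk -v s="$slot" 'index($1, s) == 1 { print $3 }' | paste -sd, -
}
```

For the GTX 1050 above, `pci_ids_for_slot 09:00` would yield `10de:1c83,10de:0fb9`, which you then append to the existing vfio-pci.ids list.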
...and use the EXACT same data I use below. Here's mine:

[root@archer ~]# cat /etc/modprobe.d/snd-hda-intel.conf
options snd-hda-intel enable_msi=1

That's it. Literally one line. Yes, you have to type it in word for word. Don't worry, I know you're not running anything Intel, but trust me, it works. Save that and reboot the Linux guest OS. Everything should be in order. I really hope this all helps someone. Thanks! :o)
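If you'd rather script it, the conf file above can be written in one step. A sketch: the function name is mine, and the target path is parameterized only so it can be tested; the default is the exact path from the post, and it must be run inside the guest, not on Unraid.

```shell
# Write the snd-hda-intel MSI option shown above into modprobe.d.
# write_snd_hda_conf is a hypothetical helper; the default path is
# the one from the post (/etc/modprobe.d/snd-hda-intel.conf).
write_snd_hda_conf() {
    local conf="${1:-/etc/modprobe.d/snd-hda-intel.conf}"
    echo "options snd-hda-intel enable_msi=1" > "$conf"
}
```

Run it as root inside the guest, then reboot the guest for the module option to take effect.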
    1 point
  3. Are the disks actually spinning? You don't say what make/model they are, but try Googling "SATA 3.3V pin" and see if that helps.
    1 point
  4. Give this plugin a try. It still needs some additional error/version checking, but it definitely works on 6.6.6 and 6.6.7, and it's much simpler than the "auto" plugin that we all have been using for a while. https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg
    1 point
  5. OK, just tried this and, as expected, no harm was done to the SSD. In fact, since there isn't a vdisk to delete, nothing was deleted on the SSD; after doing it, the SSD still contains the NTFS partition:
    1 point
  6. Nextcloud is superb. There is a SpaceInvaderOne video showing how to set it up using a reverse proxy with Let's Encrypt. I highly recommend going that route.
    1 point
  7. 1 point
  8. Download the Community Applications plugin; this will add a new Apps tab, and then you can search for any docker container you want to install. Sent from my SM-G930F using Tapatalk
    1 point
  9. On RC5, FTP will not start. I removed the plugin (reboot) and reinstalled it (reboot) and cannot get it to start. Rolled back to 6.6.7 and FTP starts without issues. Upgraded back to RC5 and FTP will not start. I tried the file listed above (ProFTPd-SlrG-Dependency-1.6_x64.tar.gz) and edited ProFTPd.plg, and the plugin does not even appear on the server after reboot. unRAID is not spitting out any errors; the only thing the unRAID log file shows is:

Mar 10 01:24:05 unRAID emhttpd: req (3): cmd=%2Fplugins%2FProFTPd%2Fscripts%2Frc.ProFTPd&arg1=buttonstart&runCmd=Start&csrf_token=****************
Mar 10 01:24:05 unRAID emhttpd: cmd: /usr/local/emhttp/plugins/ProFTPd/scripts/rc.ProFTPd buttonstart
Mar 10 01:24:08 unRAID sudo: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/local/SlrG-Common/usr/local/sbin/proftpd -c /etc/proftpd.conf

It says OK when starting jails, says OK when starting ProFTPd, then the screen refreshes and the status reads: stopped. So it's trying to start it, but no other info about what is going on is shown, as far as I can see. For what it's worth, my mount script does run, so we know it gets that far at least.
    1 point
  10. Probably still best to ask if they can add support to their docker image, since it should just "not work" if nvidia-smi isn't available. But if they are opposed, here's a user script you can run on a schedule:

#!/bin/bash
con="$(docker ps --format "{{.Names}}" | grep -i netdata)"
exists=$(docker exec -i "$con" grep -iqe "nvidia_smi: yes" /etc/netdata/python.d.conf >/dev/null 2>&1; echo $?)
if [ "$exists" -eq 1 ]; then
    docker exec -i "$con" /bin/sh -c 'echo "nvidia_smi: yes" >> /etc/netdata/python.d.conf'
    docker restart "$con" >/dev/null 2>&1
    echo '<font color="green"><b>Done.</b></font>'
else
    echo '<font color="red"><b>Already Applied!</b></font>'
fi
    1 point
  11. Got this working in netdata! So don't worry about grabbing a special version; the version from Community Apps is fine. Steps to reproduce:

- Grab the docker from Community Apps.
- During the initial container install, switch to advanced view and add --runtime=nvidia to the end of the list.
- Add a new variable "NVIDIA_VISIBLE_DEVICES" with the value set to "all".
- Click done, and let the docker install.
- Open a console for the docker and run: echo "nvidia_smi: yes" >> /etc/netdata/python.d.conf
- Restart the docker. Enjoy.
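Outside the Unraid template UI, the same steps look roughly like this as a script. A sketch only: the image name netdata/netdata and the function name are assumptions; use whatever image the Community Apps template actually pulls.

```shell
# Sketch of the steps above: run netdata with the nvidia runtime,
# enable the nvidia_smi collector, and restart the container.
# setup_netdata_gpu is a hypothetical helper; the netdata/netdata
# image name is an assumption.
setup_netdata_gpu() {
    docker run -d --name=netdata \
        --runtime=nvidia \
        -e NVIDIA_VISIBLE_DEVICES=all \
        netdata/netdata
    # Enable the nvidia_smi python collector inside the container
    docker exec -i netdata /bin/sh -c \
        'echo "nvidia_smi: yes" >> /etc/netdata/python.d.conf'
    # Restart so netdata picks up the collector config
    docker restart netdata
}
```

Note that the config edit lives inside the container, so it is lost whenever the container is recreated, which is exactly why the scheduled user script in the previous post exists.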
    1 point
  12. Open a console for the docker and see what the output of which nvidia-smi is. That will tell you where the nvidia-smi binary is located.
    1 point
  13. Currently using a P2000 myself. The decode patch has basically no drawbacks other than being unsupported. Eventually nvdec support will be a standard Plex feature, and the patch will become obsolete at that point. There is one possible downside to the patch currently, which I need to figure out a "proper" way to approach: if for some reason your docker starts and the nvidia card/driver is not working, your transcoder will cease to function entirely, rendering playback impossible outside of native DirectPlay. This is a pretty unlikely circumstance if you've configured everything and the nvenc side of the house works fine, but it is certainly a concern because hardware failures occur, and changing to mainline Unraid from the Unraid-nvidia branch would also break Plex with the current script.
    1 point
  14. Wrong command. Try /etc/rc.d/rc.nginx restart
    1 point
  15. It's impossible for anyone to know where your drives are located, as the configurations out there vary a lot, so you must assign them manually the first time. No automagic is happening there. As for the error, it looks like it might be detecting a false blank or something odd in the SMART rotational data. I'll fix this later.
    1 point
  16. Thank you, I did not know about that option.
    1 point
  17. Respect. Thank you for putting security first and releasing a patch for this. Cheers.
    1 point
  18. After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled.

I found a page called "How to Disable or Enable Write Caching in Linux". The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.

root@Thor:/etc# sdparm -g WCE /dev/sdd
/dev/sdd: HGST HUS726040ALS214 MS00
WCE 0 [cha: y, def: 0, sav: 0]

This shows that the write cache is disabled.

root@Thor:/etc# sdparm --set=WCE /dev/sdd
/dev/sdd: HGST HUS726040ALS214 MS00

This enables it, and my writes returned to the expected speeds.

root@Thor:/etc# sdparm -g WCE /dev/sdd
/dev/sdd: HGST HUS726040ALS214 MS00
WCE 1 [cha: y, def: 0, sav: 0]

This confirms the write cache has been set.

Now, I'm not totally sure why the write cache was disabled under Unraid; bug or feature? While doing my googling there was mention of a kernel bug a few years ago where, if system RAM was more than 8GB, the write cache got disabled. My current system has a little more than 8GB, so maybe?
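If you have several SAS drives, the check above is easy to loop. A read-only sketch: the function name is mine, and it only reports; to actually enable the cache on a flagged drive, run sdparm --set=WCE on it as shown above.

```shell
# Report which of the given drives have the SCSI write-cache (WCE)
# bit off, using sdparm as in the post. Read-only by design; to fix
# a flagged drive, run: sdparm --set=WCE /dev/sdX
# check_write_cache is a hypothetical helper name.
check_write_cache() {
    local dev
    for dev in "$@"; do
        # `sdparm -g WCE` prints "WCE  0 ..." when the cache is off
        if sdparm -g WCE "$dev" | grep -q 'WCE  *0'; then
            echo "$dev: write cache disabled"
        fi
    done
}
```

For example, `check_write_cache /dev/sd?` would flag every drive whose cache is off; note that WCE changes made without `--save` do not persist across a power cycle on many drives.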
    1 point
  19. No, there's not a single script that does everything you want (at least not that I know of), but there are several scripts that do. I have 3 scripts running, including @Squid 's excellent CA Appdata Backup/Restore:

1. CA Appdata Backup/Restore takes care of the docker containers, libvirt.img, and USB backup. I've configured it to run once a week.
2. A script to back up VMs and their XMLs, which I found here thanks to @danioj , @JTok and everyone else that may have contributed. I have it running once a month.
3. A script to back up my array, which I found here, here and here thanks to @tr0910 , @Hoopster and everyone else that may have contributed. This one runs once a week.
4. The cron jobs are configured using (once again) @Squid 's excellent User Scripts plugin.

Now that I have said thank you to all the guys that did the hard work for me and you, all you have to do is take all the Legos and piece them together. The scripting gurus have molded and crafted all the pieces, and now you also have the blueprint/manual listed above. So off you go, and have fun building! :)
    1 point