flexage

Everything posted by flexage

  1. Downloading Linux ISOs from Usenet is a noble pastime 👌🏻
  2. Thanks man, I've installed using the version tag as you suggested 👍🏻 Much appreciated
  3. Thanks for the good info my dude, it's reassuring to know that it works with the hardware. I've just had a gigabit line switched on, so I'll be keeping DPI disabled (I had gleaned enough to know the USG isn't beefy enough to handle it). I'll give the 6.x version a go; since this is a fresh network I can experiment a bit without much consequence. Thanks for the tip about the current bandwidth and the Android app, very useful stuff to know.
  4. Hi, if I may request the benefit of group wisdom as a new UniFi user who's just getting started with a budget setup to test the waters: which version/tag would you recommend when setting up a new network consisting of a USG and 2 x AC LR Access Points? I'm afraid I'm currently bereft of existing knowledge on the subject... does the USG even support the 6.x version of the app, or am I restricted to the 5.9.x version? TIA
  5. I just had a look at a backup of my USB as I couldn't remember off the top of my head. The file I edited to fix this issue was: `/config/vfio-pci.cfg`
     Before modifying ANY file on your USB, I suggest you make a backup of that file. The contents of that file for me were very short, with something similar to the following:
     BIND=0000:25:00.0|10de:0f00 0000:25:00.1|10de:0bea
     The above references 2 hardware addresses to be bound to vfio, each separated by a space. I either deleted that whole line, or deleted everything after `BIND=`. I'm sorry I can't remember exactly, it's been 2 months since I had this issue. If you make sure to take a backup of that file before you edit it, then I don't think there's any harm in making changes. I hope this helps you get back up and running (there's a sketch of the edit after this post list).
  6. Nevermind, I found a fix. I pulled the thumb drive from the server, and stuck it in a laptop. Checking over the config files, I could see some VFIO device bindings that were probably pointing at addresses that had changed since being bound. So I cleared off the bindings in the config file, reattached the thumb drive, and was able to boot successfully.
  7. Hey all, like the title says, I was attempting to see the native IOMMU grouping of my PCI devices, so I disabled the PCIe ACS Override setting and rebooted. The web GUI didn't come back up, so I switched on the monitor and could see the CLI login prompt, and that an IPv4 address hadn't been assigned. I attempted to log in, but nothing I typed would appear on the screen.
     Since I hadn't seen any hard drive LED activity during the boot, I felt confident that the array hadn't come online, so I pressed the hardware power button and did a graceful shutdown. I rebooted and confirmed that I could use the keyboard to access the BIOS, all good there. I continued with the boot and selected the option to boot into safe mode with the GUI enabled, hoping that I could get access to the dashboard and check things over.
     Unfortunately, the same issue with the keyboard not working persists here too. I've tried 3 different keyboards in total, and tried both having the keyboard connected at boot, and unplugging and reattaching the keyboard after the GUI loads... still no luck. Does anybody have any ideas?
  8. It sounds like you're suffering from the AMD GPU reset bug, it's a known issue. I've heard reports that the very latest AMD cards have fixed this bug, but can't confirm. I'm surprised that you haven't had any replies to this topic after so long. A surefire way to get your AMD GPU back up and available is to reboot your Unraid box. If you don't want to have to do that, SpaceInvaderOne has a method by which you make a script that removes the GPU from the system, then puts it into standby, then re-initialises the GPU when you wake the system back up (a rough sketch of the mechanism is after this post list). I tried it but didn't have great success, and I felt dirty afterwards 😂 Here's a link to SIO's video about the process:
  9. So, with the recent inclusion of GPU drivers in Unraid, it seems a lot of things should now be possible. We already had Nvidia GPU support, by way of the awesome community Nvidia build, but now we also have official driver support for AMD GPUs. I wanted to pass my AMD GPU through to a Docker container for transcoding, the same way many folks have been doing with Nvidia GPUs.
     Only thing is (and forgive me if I'm mistaken), there seems to be a complete lack of documentation on how this all fits together. The Unraid docs don't mention GPUs and Docker in the same context anywhere that I've seen. Searching here on the forums, and on Google, brings up a whole bunch of confusing results, 99% of which are from before the official driver inclusion and specific to the previous Nvidia community builds.
     Ultimately, I think a write-up of information around Docker and GPUs in the official documentation would be the way forward, but immediately I have a few queries I was hoping to find answers to, namely:
     1. In my container configuration, am I required to add any new "Vars" or "Devices" in order to successfully pass through a GPU using the new Unraid drivers? (I would assume yes, so I have attempted to pass through my GPU device, `/dev/dri`.)
     2. A lot of older topics mention that you had to add an extra run parameter to your Docker container (namely `--runtime=nvidia`). Is this still required? And is there an AMD equivalent (`--runtime=amd`)?
     3. My container has an FFMPEG build that supports VAAPI, so an AMD GPU should work with it, but the device fails to initialise for FFMPEG's HW detect. I've read that the same drivers that are available for the GPU on the Unraid host need to be added to the Docker container... what is the process for copying the drivers in there?
     Thanks for your time reading this, it would be really good to get answers to my 3 questions above (see the sketch after this post list). I still believe that the need to ask these questions could have been avoided with solid documentation though, so I'd like to cast a vote in that direction.
  10. So with the latest Unraid beta release (6.9 beta 35), GPU drivers have been added to the OS. I had a quick stab at getting an AMD GPU passed through to the lsio Emby Docker, but have run into an issue.
      I enabled the AMDGPU driver on Unraid, and installed the lsio Emby Docker with the `/dev/dri/` device added. Emby Server shows my GPU as available for transcoding (I have Emby Premiere so no restriction on transcode). I started a transcode job, and it was using the CPU, not the GPU, to transcode. Looking at the transcode log, it looks like it attempted to open the GPU for transcoding, but failed and fell back to CPU.
      I've attached the Emby transcode log, but the first error that occurs is:
      /home/embybuilder/Buildbot/x64/libdrm-x64/staging/share/libdrm/amdgpu.ids: No such file or directory
      Then FFMPEG fails with some sort of I/O error. I had a look in the Docker container, and the `/home/embybuilder` directory did not exist... Is there something else I should be doing to get this working? (A couple of quick checks are sketched after this post list.)
      ffmpeg-transcode-646aefab-ad63-4910-a5d6-fc108c6b9031_1.txt
  11. Sorry, GPU passthrough newbie here. If I were to enable the AMDGPU driver via the method detailed in the OP, am I to expect the GPU to just be available for transcoding in official docker containers from Emby and Plex? If not, are there any adjustments that I or official maintainers are expected to make to these docker containers to enable GPU transcoding? Thanks in advance
  12. Hey fellow Unraiders, on my Unraid server I've got a UEFI Ubuntu Server 20.04 VM running Emby. I've followed SpaceInvaderOne's Advanced GPU Passthrough Techniques guide, and have the AMD RX570 8GB passed through. I also installed the latest Radeon Software for Linux / AMDGPU-PRO drivers. The RX570 is the primary and only GPU installed in the system at present time.
      The VM boots successfully (although no Tiano Core screen, just the Ubuntu boot-up logs whirling by), and I can begin to GPU transcode in Emby. However, after a while the GPU locks up, especially if I start a second transcode or seek to a new video position. Ubuntu and Emby continue to run, but the GPU appears to be hung and I can no longer run `radeontop` or start any GPU-related activity such as transcoding.
      I made sure to pass through the RX570 HDMI sound card too, and made sure to put them on the same hardware slot in KVM:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x29' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x09' slot='0x01' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x29' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x09' slot='0x01' function='0x01' multifunction='on'/>
      </hostdev>
      Also, both the VGA and audio devices were already in an isolated IOMMU group together, so no need for the override, right?
      IOMMU group 15:
        [1002:67df] 29:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev ef)
        [1002:aaf0] 29:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590]
      When looking at the Emby Server transcode logs, I'm seeing some messages that I'm not familiar with, although they seem to do with no longer being able to access the GPU:
      >> ThrottleBySegmentRequest: RequestPosition: 00:05:30 - TranscodingPosition: 00:06:21 - ThrottleBuffer: 51s (Treshold: 120s)
      >> ThrottleBySegmentRequest: RequestPosition: 00:05:30 - TranscodingPosition: 00:06:21 - ThrottleBuffer: 51s (Treshold: 120s)
      >> ThrottleBySegmentRequest: RequestPosition: 00:05:30 - TranscodingPosition: 00:06:21 - ThrottleBuffer: 51s (Treshold: 120s)
      >> ThrottleBySegmentRequest: RequestPosition: 00:05:30 - TranscodingPosition: 00:06:21 - ThrottleBuffer: 51s (Treshold: 120s)
      >> ThrottleBySegmentRequest: RequestPosition: 00:05:30 - TranscodingPosition: 00:06:21 - ThrottleBuffer: 51s (Treshold: 120s)
      amdgpu: amdgpu_cs_query_fence_status failed.
      amdgpu: amdgpu_cs_query_fence_status failed.
      amdgpu: amdgpu_cs_query_fence_status failed.
      16:23:55.588 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: amdgpu_cs_query_fence_status failed.
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.595 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      16:23:55.595 frame= 1228 fps= 18 q=-0.0 size= 30086kB time=00:06:21.86 bitrate=4789.8kbits/s throttle=off speed=0.752x
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.600 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.605 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.622 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.629 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      16:23:55.633 [mpegts @ 0x1a58a40] H.264 bitstream error, startcode missing, size 0
      amdgpu: The CS has been cancelled because the context is lost.
      amdgpu: The CS has been cancelled because the context is lost.
      I've also tried the same setup as above, but with a Radeon 550 2GB in the system, and get exactly the same behaviour and error logs. For shits and giggles I set up a Win 10 VM with a GPU passed through and went to install Emby, however after I'd installed the official Radeon drivers and opened Chrome the VM locked up.
      Any ideas, fellow raiders? Would having 2 GPUs in the system, i.e. the lesser 550 in the first PCIe slot (not passed through to any VM) and the RX570 in another PCIe slot (passed through to the Emby/Ubuntu VM), help at all? TIA
  13. Over the summer there was some extreme heat that I didn't mitigate very well, and every single one of my drives in my home Unraid server experienced some form of damage. Some of them failed instantly, others have been slowly failing in the time since then. So far, I've already decommissioned a few storage array drives, and had to ditch the SSD cache drives as they too suffered heat damage. Surprisingly, I haven't lost very much data at all, just a few precious memories that I fortunately have on an offsite backup.
      I've been trying to free up some time to replace the entire array and install a new cache drive, and as of this weekend I have a few days to make it happen. I've got a bunch of WD drives arriving tomorrow for shucking, along with a single 512GB SSD for the cache drive. I have some data on the storage array that I'd like to move across to the new drives, and there are also some VMs and Docker apps that I'd like to move across. I wanted to put all appdata on the faster cache drive, and possibly the docker/VM bits.
      My first thought on how to do this is to just remove my failing drives from the array, and use the Unassigned Devices plugin to access the data on them for copying to the new array (roughly the sort of copy sketched after this post list). This sounds like a straightforward method to migrate the data from what I've learned about Unraid so far. Still, I thought I'd ask here on the forums to see if anyone thinks this method would be ok? Is there any other preferred way to accomplish this migration? TIA
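
For the `/config/vfio-pci.cfg` fix described in post 5, here is a minimal sketch of the edit, assuming the Unraid flash drive is mounted at `/boot` on a running server; the PCI addresses are just the example values quoted in that post:

```sh
# Back up the existing bindings file before touching it
cp /boot/config/vfio-pci.cfg /boot/config/vfio-pci.cfg.bak

# The file held two devices bound to vfio-pci, separated by a space, e.g.:
#   BIND=0000:25:00.0|10de:0f00 0000:25:00.1|10de:0bea

# Clear the stale bindings (deleting the whole line works just as well),
# then reboot so nothing is force-bound to vfio-pci at startup.
echo "BIND=" > /boot/config/vfio-pci.cfg
```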
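
Post 8 mentions SpaceInvaderOne's workaround for the AMD reset bug. The sketch below is not his actual script, only the generic sysfs mechanism that kind of workaround is built around (drop the GPU off the PCI bus, suspend the host, rescan on wake); the PCI addresses are the RX570 from post 12 and are only examples:

```sh
#!/bin/bash
# Hypothetical reset-bug helper: remove / suspend / rescan cycle.
GPU="0000:29:00.0"     # example GPU address, adjust to your card
AUDIO="0000:29:00.1"   # the card's HDMI audio function

# Remove both functions from the PCI bus
echo 1 > /sys/bus/pci/devices/$GPU/remove
echo 1 > /sys/bus/pci/devices/$AUDIO/remove

# Suspend to RAM; the script continues once the host is woken again
echo mem > /sys/power/state

# Rescan the bus so the card comes back in a clean state
echo 1 > /sys/bus/pci/rescan
```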
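
On the questions in post 9: as far as I know there is no AMD equivalent of `--runtime=nvidia`; passing the host's `/dev/dri` device through is generally all that's needed for VAAPI. A hedged sketch of what that looks like as a plain docker run (the container name, ports and paths are placeholders, not anything from the original post):

```sh
# Sketch: Emby container with the host's render node passed through.
# On the Unraid template this corresponds to adding /dev/dri as a "Device";
# no extra --runtime flag is involved for AMD.
docker run -d \
  --name=emby-test \
  --device=/dev/dri:/dev/dri \
  -v /mnt/user/appdata/emby:/config \
  -v /mnt/user/media:/media \
  -p 8096:8096 \
  linuxserver/emby
```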
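
For the fallback-to-CPU failure in post 10, a couple of quick checks that the container can actually see and use the render node (the container name `emby` and the node `renderD128` are placeholders; yours may differ):

```sh
# Is the render node visible inside the container?
docker exec emby ls -l /dev/dri

# Which group owns it, what are its permissions, and what user/groups
# does the container process actually run with?
docker exec emby stat -c '%G %a %n' /dev/dri/renderD128
docker exec emby id
```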
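
And for the migration plan in post 13, a rough sketch of the copy step with an old disk mounted outside the array; the mount point and destination share are placeholders:

```sh
# Dry run first to see what would be copied
rsync -avhn /mnt/disks/old_disk1/ /mnt/user/data/

# Then the real copy, with progress output
rsync -avh --progress /mnt/disks/old_disk1/ /mnt/user/data/
```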