flexage

Members · Content count: 9 · Community reputation: 1 (Neutral) · Rank: Newbie


  1. I just had a look at a backup of my USB as I couldn't remember off the top of my head. The file I edited to fix this issue was `/config/vfio-pci.cfg`. Before modifying ANY file on your USB, I suggest you make a backup of that file (a sketch of the whole edit appears after this list). The contents of that file for me were very short, with something similar to the following: `BIND=0000:25:00.0|10de:0f00 0000:25:00.1|10de:0bea` The above references two hardware addresses to be bound to VFIO, each separated by a space. I either deleted that whole line, or deleted everything after `BIND`
  2. Never mind, I found a fix. I pulled the thumb drive from the server and stuck it in a laptop. Checking over the config files, I could see some VFIO device bindings that were probably pointing at addresses that had changed since being bound. So I cleared the bindings from the config file, reattached the thumb drive, and was able to boot successfully.
  3. Hey all, Like the title says, I was attempting to see the native IOMMU grouping of my PCI devices (a snippet for listing the groups appears after this list), so I disabled the PCIe ACS Override setting and rebooted. The web GUI didn't come back up, so I switched on the monitor and could see the CLI login prompt, and that an IPv4 address hadn't been assigned. I attempted to log in, but nothing I typed would appear on the screen. Since I hadn't seen any hard drive LED activity during the boot, I felt confident that the array hadn't come online, so I pressed the hardware power button and did a graceful shutdown.
  4. It sounds like you're suffering from the AMD GPU reset bug; it's a known issue. I've heard reports that the very latest AMD cards have fixed this bug, but I can't confirm. I'm surprised that you haven't had any replies to this topic after so long. A surefire way to get your AMD GPU back up and available is to reboot your Unraid box. If you don't want to have to reboot, SpaceInvaderOne has a method (roughly sketched after this list) by which you make a script that removes the GPU from the system, puts the system into standby, then re-initialises the GPU when you wake the system back u
  5. So, with the recent inclusion of GPU drivers in Unraid, it seems a lot of things should now be possible. We already had Nvidia GPU support by way of the awesome community Nvidia build, but now we also have official driver support for AMD GPUs. I wanted to pass my AMD GPU through to a Docker container for transcoding, the same way many folks have been doing with Nvidia GPUs. Only thing is (and forgive me if I'm mistaken), there seems to be a complete lack of documentation on how this all fits together. Like on the Unraid docs, I've not seen a single mentio
  6. So with the latest Unraid beta release (6.9 beta 35), GPU drivers have been added to the OS. I had a quick stab at getting an AMD GPU passed through to the lsio Emby Docker, but have run into an issue. I enabled the AMDGPU driver on Unraid and installed the lsio Emby Docker with the `/dev/dri/` device added (the usual pattern is sketched after this list). Emby Server shows my GPU as available for transcoding (I have Emby Premiere, so no restriction on transcoding). I started a transcode job, and it was using the CPU, not the GPU, to transcode. Looking at the transcode log, it looks like it at
  7. Sorry, GPU passthrough newbie here. If I were to enable the AMDGPU driver via the method detailed in the OP, am I to expect the GPU to just be available for transcoding in official Docker containers from Emby and Plex? If not, are there any adjustments that I or the official maintainers are expected to make to these Docker containers to enable GPU transcoding? Thanks in advance
  8. Hey fellow Unraiders, On my Unraid server, I've got a UEFI Ubuntu Server 20.04 VM running Emby. I've followed SpaceInvaderOne's Advanced GPU Passthrough Techniques guide and have the AMD RX570 8GB passed through. I also installed the latest Radeon Software for Linux / AMDGPU-PRO drivers. The RX570 is the primary and only GPU installed in the system at present. The VM boots successfully (although with no TianoCore screen, just the Ubuntu boot logs whirling by), and I can begin to GPU transcode in Emby (a couple of quick in-VM checks are sketched after this list). However, after a while the
  9. Over the summer there was some extreme heat that I didn't mitigate very well, and every single one of the drives in my home Unraid server experienced some form of damage. Some failed instantly; others have been slowly failing in the time since. So far, I've already decommissioned a few storage array drives, and I had to ditch the SSD cache drives as they too suffered heat damage. Surprisingly, I haven't lost very much data at all: just a few precious memories that I fortunately have on an offsite backup. I've been trying to free u
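
A minimal sketch of the `/config/vfio-pci.cfg` edit described in items 1 and 2. The PCI addresses and vendor:device IDs are the examples quoted in item 1; the backup filename and the `echo` approach are my own additions:

```sh
# With the Unraid USB mounted on another machine, back up the file first:
cp /config/vfio-pci.cfg /config/vfio-pci.cfg.bak

# The file bound two devices to vfio-pci on one line, e.g.:
#   BIND=0000:25:00.0|10de:0f00 0000:25:00.1|10de:0bea
# Clearing the stale bindings means deleting that line entirely, or
# deleting everything after "BIND", leaving an empty binding:
echo "BIND=" > /config/vfio-pci.cfg
```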
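
For item 3, a generic shell loop (plain sysfs, nothing Unraid-specific) that lists which PCI devices share each IOMMU group once the box is reachable again:

```sh
# Walk sysfs and describe every device in every IOMMU group
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  group=${dev#/sys/kernel/iommu_groups/}   # strip the sysfs prefix
  group=${group%%/*}                       # keep only the group number
  printf 'IOMMU group %s: ' "$group"
  lspci -nns "${dev##*/}"                  # describe the device at that PCI address
done
```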
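
For item 4, a rough sketch of the remove/suspend/rescan idea; this is a paraphrase of the technique, not SpaceInvaderOne's actual script, and the PCI address is a placeholder:

```sh
GPU=0000:25:00.0                           # placeholder: your GPU's PCI address

echo 1 > /sys/bus/pci/devices/$GPU/remove  # detach the GPU from the PCI bus
echo mem > /sys/power/state                # suspend to RAM; blocks until wake
echo 1 > /sys/bus/pci/rescan               # on wake, re-enumerate and re-init the GPU
```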
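
For items 5 and 6, the common pattern for handing an AMD GPU to a container is mapping `/dev/dri` through. A sketch assuming the linuxserver.io Emby image (the container name is arbitrary):

```sh
# Run Emby with the host's DRI render devices mapped in
docker run -d --name emby \
  --device /dev/dri:/dev/dri \
  lscr.io/linuxserver/emby

# Confirm the container actually sees the render node
docker exec emby ls -l /dev/dri
```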
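
Finally, for item 8, two quick checks inside the Ubuntu VM to confirm the RX570 is bound to the `amdgpu` driver and that VA-API can see it; `vainfo` is assumed to come from the Ubuntu package of the same name:

```sh
lspci -nnk | grep -iA3 vga   # the RX570 should show "Kernel driver in use: amdgpu"
sudo apt install -y vainfo   # assumption: Debian/Ubuntu package name
vainfo                       # list the codecs VA-API exposes on the passed-through GPU
```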