flaggart

Members
  • Content Count

    107
Community Reputation

0 Neutral

About flaggart

  • Rank
    Member



  1. Replugged; the Unraid-Kernel-Helper plugin still says no custom loaded modules found. Output of "lsusb -v" attached: lsusb.txt
  2. Yes, it is USB. I've rebooted several times and tried all USB ports; I thought I was going mad and not overwriting the boot files properly! It seems there are a few reports of the 5990 not being detected regardless of drivers, so I'm not sure the TBS drivers would make any difference, but if it is no hassle for you it would be nice to test. The device is listed under lsusb, but /dev/dvb does not exist and it does not show up under your plugin.
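To make that symptom concrete, a quick check along these lines (a hypothetical diagnostic, not from the original post) shows whether the kernel has registered any DVB adapters for the tuner:

```shell
# Hypothetical quick check: a working DVB driver registers the tuner as
# /dev/dvb/adapterN. This prints one of two fixed messages either way.
if ls /dev/dvb/adapter* >/dev/null 2>&1; then
  echo "DVB adapters present"
else
  echo "no DVB adapters registered"
fi
```

A device that shows up in `lsusb` but prints "no DVB adapters registered" here matches the situation described: the USB hardware is visible, but no driver has claimed it.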
  3. Ah, I see. I tried LibreELEC but it does not detect my TBS 5990. Thanks anyway.
  4. Hello. Sorry if I am being a bit thick here, but all I want to do is build 6.8.3 with the TBS drivers. There is no mention of how to specify this using environment variables on Docker Hub or in the first post here. Can someone please advise? Thanks
  5. I had the same issue after the update. The browser (Vivaldi) would not load the client and showed this JS error. It worked fine in Chromium, and was fine in Vivaldi after I unpinned the tab and reloaded the page, having made sure adblock and tracker protection were off.
  6. I am not sure which drive area you mean. If you mean the white 4x drive bay directly above the PSU, there should not be any issues with this regardless of cables. If you mean the space with the velcro strap to the left of the PSU in the images above, I think I had to orient the drive lengthways so that the connectors were pointing towards the cable-management hole through to the other chamber. This matches the more recent image from stuntman83 above.
  7. I retired it towards the end of 2019, so it had a good 4.5 years. I had three fans in the drive chamber using Fan Auto Control to ramp up with drive temperature and never had an issue. If anything, the four drives in the main chamber were the ones that got the hottest, so I'd suggest putting drives there that see less use or have fewer platters, whatever makes them less likely to heat up. In terms of vibration and noise, I had the box in the same room I slept in for a few years; I would not describe it as quiet, but it wasn't bothersome.
  8. Hello. I have been experiencing a similar issue when rebooting or shutting down a Windows 10 VM with an RTX 2070 Super and a motherboard USB controller passed through. I blindly added pcie_no_flr to my syslinux.cfg as suggested here and noticed no difference. I then took the time to read and understand the thread and realised it takes a specific PCIe device, and looked up the correct entry for my own USB controller under Tools > System Devices. I have reset, hibernated, rebooted and powered off the VM since, and with fingers crossed there has been no issue so far.
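For context, the form of this workaround commonly posted on the Unraid forums is a syslinux.cfg append line where pcie_no_flr is given the bracketed vendor:device IDs shown in Tools > System Devices. This is a sketch only; the IDs below are example values for an AMD USB controller and must be replaced with the ones for your own hardware:

```
# /boot/syslinux/syslinux.cfg (fragment) — example IDs, substitute your own
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_no_flr=1022:149c,1022:1487 initrd=/bzroot
```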
  9. Same here unfortunately
  10. Hi all. Over the past few months I have been experiencing complete hard lockups of Unraid and have had to power cycle. Each time it happens as a direct result of attempting to reboot the same Windows 10 VM (via the shutdown menu inside the VM, not using the web GUI). Syslog as follows:
      May 4 11:17:41 SERVER kernel: mdcmd (552): spindown 10
      May 4 12:25:40 SERVER kernel: mdcmd (553): spindown 0
      May 4 13:22:59 SERVER kernel: mdcmd (554): spindown 10
      May 4 13:23:00 SERVER kernel: mdcmd (555): spindown 5
      May 4 13:52:18 SERVER kernel: mdcmd (556): spindown 9
      May 4 16:04:17 SERVER
  11. Surprised there is not more activity from others in this thread; it would be amazing to be able to virtualise integrated graphics in the same way as the rest of the CPU and have multiple VMs benefit from it. I would ask you for more details to try it myself, but I didn't think something like this would be possible and I am now running Xeons.
  12. I would also like this. In its absence I have been referring to this post which suggests the same information is available without the tool.
  13. Hi all. Like many here (I assume), I am planning a Ryzen 3000 series build in the next couple of months to replace my existing setup. This brings with it some choices around chipsets, PCIe lanes, etc. My current setup is:
      • Node 804 (limited to Micro ATX or smaller)
      • i7-2600k
      • Supermicro Micro ATX board
      • Broadcom 9207-8i 8-port HBA (PCIe 3.0 x8)
      • TBS dual DVB tuner (PCIe 2.0 x1)
      Switching to Ryzen has certain limitations: the lack of integrated graphics means I have to fit a GPU on a mATX board, reducing the number of available slots
  14. I just used the velcro straps that were already there; I think they are supposed to be for cable management. It wasn't the greatest arrangement, so I just settled for one drive there.
  15. It is an OWC OWCMM52T35 (OWC Multi-Mount 3.5" to 5.25" bracket). I have also used a "Nexus DoubleTwin" for the same purpose.