lovingHDTV

Forum member activity feed

  1. OK, I took a few hours today: created an account, downloaded, installed, and created a VM on ESXi. There was a nice walkthrough here. As my goal for a VM solution was stability and ease of use, and both have failed to deliver, I'll now just deploy the system bare metal, as I've had it running since unRaid was first released so many years ago.
  2. I'm trying to get this to work with unRaid running in a VM on ProxMox. It works fine when I boot unRaid bare metal, but it doesn't work within a VM. I think I have the GPU passed through properly, as the lspci -v output between the ProxMox VM and bare metal appears almost the same; only the PCI address and IRQ differ. Maybe someone here can spot a difference.

     Bare metal (unRaid):

        02:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
                Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
                Flags: bus master, fast devsel, latency 0, IRQ 26, NUMA node 0
                Memory at ef000000 (32-bit, non-prefetchable) [size=16M]
                Memory at c0000000 (64-bit, prefetchable) [size=256M]
                Memory at d0000000 (64-bit, prefetchable) [size=32M]
                I/O ports at 5000 [size=128]
                [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Legacy Endpoint, MSI 00
                Capabilities: [100] Virtual Channel
                Capabilities: [250] Latency Tolerance Reporting
                Capabilities: [128] Power Budgeting <?>
                Capabilities: [420] Advanced Error Reporting
                Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
                Capabilities: [900] Secondary PCI Express <?>
                Kernel driver in use: nvidia
                Kernel modules: nvidia_drm, nvidia

        02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
                Subsystem: Gigabyte Technology Co., Ltd GP106 High Definition Audio Controller
                Flags: bus master, fast devsel, latency 0, IRQ 10, NUMA node 0
                Memory at f0080000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting

     ProxMox VM (unRaid):

        01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
                Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
                Flags: bus master, fast devsel, latency 0, IRQ 16
                Memory at c0000000 (32-bit, non-prefetchable) [size=16M]
                [virtual] Memory at 800000000 (64-bit, prefetchable) [size=256M]
                Memory at 810000000 (64-bit, prefetchable) [size=32M]
                I/O ports at d000 [size=128]
                [virtual] Expansion ROM at c1020000 [disabled] [size=128K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Legacy Endpoint, MSI 00
                Capabilities: [100] Virtual Channel
                Capabilities: [250] Latency Tolerance Reporting
                Capabilities: [128] Power Budgeting <?>
                Capabilities: [420] Advanced Error Reporting
                Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
                Kernel driver in use: nvidia
                Kernel modules: nvidia_drm, nvidia

        01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
                Subsystem: Gigabyte Technology Co., Ltd GP106 High Definition Audio Controller
                Flags: bus master, fast devsel, latency 0, IRQ 10
                Memory at c1000000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting

     The only thing I can find that is different is this biggie:

        dmesg | grep GPU
        [  271.949382] NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x26:0xffff:1133)
        [  271.949691] NVRM: GPU 0000:01:00.0: rm_init_adapter failed, device minor number 0

     From googling around, it sounds like it is one of two things: either the GPU is broken (which I know is not the case, because it works bare metal), or the NVIDIA drivers are loaded wrong. Odd here, because they are the same drivers, etc. Hoping someone may see something I missed.

     thanks
     david
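A mechanical way to compare two captures like the ones above is to mask the fields that legitimately differ between host and VM (PCI address, IRQ number, BAR/ROM addresses) and diff whatever remains. This is just a sketch; `baremetal.txt` and `proxmox.txt` are assumed filenames for saved `lspci -v` output.

```shell
#!/usr/bin/env bash
# Mask fields that are expected to differ between bare metal and a VM
# (PCI bus address, IRQ number, BAR/ROM addresses) so that a diff shows
# only the differences that might actually matter.
normalize() {
    sed -E \
        -e 's/^[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]/BB:DD.F/' \
        -e 's/IRQ [0-9]+/IRQ N/' \
        -e 's/ at [0-9a-f]+/ at ADDR/g' \
        "$1"
}

# Usage (assumed filenames for the two saved captures):
#   lspci -v -s 02:00.0 > baremetal.txt   # on bare metal
#   lspci -v -s 01:00.0 > proxmox.txt     # inside the VM
#   diff <(normalize baremetal.txt) <(normalize proxmox.txt)
```

With the addresses and IRQs masked, the remaining diff lines (for example the missing NUMA node or the missing Secondary PCI Express capability) are the candidates worth investigating.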
  3. Flashed the SAS 3008 to IT mode and got my SAS mini-HD cables. Everything is working on bare metal: SAS, SATA, and the GPU shared to the Plex Docker. I lose GPU sharing when running under ProxMox; I can see the GPU in unRaid, but something still isn't quite right, because the driver fails to load. I'll spend a couple more days trying to get GPU sharing working, but after that I'll just deploy the bare metal build. I may skip ProxMox even if I get GPU sharing to work, as I don't have a lot of confidence in ProxMox for GPU sharing right now. Maybe it's time to look at Hyper-V?
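For anyone repeating the IT-mode crossflash on an SAS3008-based controller: it is typically done with Broadcom's sas3flash utility from an EFI shell or Linux. The firmware filename below is only an example and depends on the exact card; verify it against your vendor's firmware package before erasing anything, and never power off between the erase and the flash.

```
sas3flash -listall                    # confirm the controller and current firmware
sas3flash -o -e 6                     # erase the existing flash (do NOT power off after this)
sas3flash -o -f SAS9300_8i_IT.bin     # write the IT-mode firmware (example filename)
```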
  4. Not sure if this will help, but I can currently see the GPU with lspci -v, yet for some reason I still get:

        Aug 20 09:53:53 Tower2 kernel: NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x26:0xffff:1133)
        Aug 20 09:53:53 Tower2 kernel: NVRM: GPU 0000:01:00.0: rm_init_adapter failed, device minor number 0

        01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
                Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
                Flags: bus master, fast devsel, latency 0, IRQ 16
                Memory at c0000000 (32-bit, non-prefetchable) [size=16M]
                [virtual] Memory at 800000000 (64-bit, prefetchable) [size=256M]
                Memory at 810000000 (64-bit, prefetchable) [size=32M]
                I/O ports at d000 [size=128]
                [virtual] Expansion ROM at c1020000 [disabled] [size=128K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Legacy Endpoint, MSI 00
                Capabilities: [100] Virtual Channel
                Capabilities: [250] Latency Tolerance Reporting
                Capabilities: [128] Power Budgeting <?>
                Capabilities: [420] Advanced Error Reporting
                Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
                Kernel driver in use: nvidia
                Kernel modules: nvidia_drm, nvidia

     I've searched for this error but never found an answer.
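For anyone else chasing the same RmInitAdapter error under a Proxmox VM: from my reading, the usual suspects are the guest seeing the GPU behind an emulated PCI (rather than PCIe) bus, or the NVIDIA driver refusing to initialize when it detects the hypervisor. A q35 machine type with a PCIe hostpci entry and the KVM signature hidden is the commonly suggested baseline. The VM ID and PCI address below are examples; this is a sketch of the guest config, not a verified fix:

```
# /etc/pve/qemu-server/<vmid>.conf (illustrative fragment; 01:00 is this card's host address)
machine: q35
hostpci0: 01:00,pcie=1,x-vga=1
cpu: host,hidden=1
```

The `hidden=1` flag is the commonly recommended workaround for NVIDIA consumer cards refusing to start inside a VM.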
  5. Right now you can see that I have used the outer two slots on one side. I tried using the outer slots (one DIMM per group of slots for each CPU), and the system wouldn't boot. I only got it to boot by putting both DIMMs on the same side.
  6. Update: the machine works well. I'm just trying to get ProxMox to pass the GPU through to unRaid for Plex. I ran two preclears at the same time and sustained 150MB/s on both of them; I was happy with that. Here is a picture of it on my workbench. I didn't have a front panel lying around, so I grabbed a spare tower and used it for testing. I'm still not 100% sure on the memory slot positioning. I know that for dual channel systems they recommend one DIMM per channel to improve memory bandwidth, but the Supermicro board manual doesn't provide any details. So do I put them both in blue, or one in blue and one in black?
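One way to see how the BIOS enumerates the DIMMs (and therefore which channel each populated slot belongs to) without opening the case is dmidecode; the slot labels such as DIMMA1/DIMMB1 come from the board's BIOS and usually encode the channel letter. As a general rule on these boards, one DIMM per channel in the slots of matching color is the bandwidth-friendly layout, but the manual's memory population table is the authority:

```
dmidecode -t memory | grep -E 'Locator|Size|Speed'
```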
  7. I'm currently trying to get the GPU to pass through. Has anyone done this successfully? I have gotten it working when I boot straight into unRaid using Plex, but I cannot get it to work under ProxMox. I followed this guide, but did not include downloading the ROM. Still looking for the right way to do this. thanks
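Before any VM-side passthrough steps can work, the Proxmox host itself needs the IOMMU enabled and the GPU bound to vfio-pci instead of the host's own driver. A typical host-side checklist looks like the fragments below (Intel example; the IDs 10de:1c02 and 10de:10f1 are what `lspci -nn` reports for this GTX 1060 3GB and its audio function, so verify yours before copying):

```
# /etc/default/grub: enable the IOMMU on an Intel host
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules: load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf: claim the GPU and its audio function for passthrough
# (10de:1c02 / 10de:10f1 come from `lspci -nn`; substitute your card's IDs)
options vfio-pci ids=10de:1c02,10de:10f1

# then: update-grub && update-initramfs -u && reboot
```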
  8. I figured out that I didn't check the UEFI boot mode option when I made my unRaid USB drive, and that is probably why I couldn't get UEFI to boot and had to revert to SeaBIOS. I'll remake the USB stick and try again. UPDATE: Yes, checking the UEFI option in the unRaid download tool does in fact let me boot in UEFI mode, and it boots properly off the USB drive this way. thanks
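A quick sanity check from inside the booted system: the kernel exposes /sys/firmware/efi only when it was started via UEFI, so its presence tells you which mode you actually booted in.

```shell
#!/usr/bin/env bash
# Report whether the currently running system was booted via UEFI or legacy BIOS.
# The /sys/firmware/efi directory exists only on UEFI boots.
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in legacy BIOS (CSM/SeaBIOS) mode"
fi
```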
  9. I followed the ProxMox setup thread by Benson here: I had to set the SeaBIOS option to get it to boot, as I had the same issue found here, where it couldn't find the boot device. I attached my settings and Options below. I then added my PCI SATA controller and hooked up two unformatted drives. I only added the controller that doesn't control the ProxMox drive; adding both causes things to hang. Now when I start my VM, I can see in the console that it is trying to boot from disk, where it just hangs forever. My boot order is CDROM, none, none, so why is it trying to boot from disk instead of my USB? If I go to the console after starting and hit ESC for the ProxMox boot menu, I can then select the USB drive (it is number 3, after the two HDDs). How do I make this the default?
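On recent Proxmox versions the per-VM boot order can be pinned from the CLI; whether a passed-through USB device shows up as a selectable boot entry depends on the firmware (SeaBIOS vs OVMF) and the Proxmox version, so treat this as a starting point rather than a confirmed fix. VM ID 100 and the device name usb0 are placeholders:

```
# List the VM's configured drives/devices, then pin the boot order
# (order= syntax is Proxmox VE 6.3+)
qm config 100
qm set 100 --boot order=usb0
```

If your version does not accept a USB device in the boot order, the fallbacks are the ESC boot menu described above or attaching the flash drive to the VM as a raw disk.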
  10. Thanks for the pointers. It was the memory in the wrong slots. Now it boots, sees all the memory, and everything works.
  11. Got my parts in; installed both CPUs and heat sinks, memory, graphics card, and front panel (power/reset). Powered on, and the power-on LED lights and the SAS alive LED blinks, but there is no beep like you would typically get after POST. No POST at all. Took the memory out and I get the beep code that says no memory is installed, so something is alive. Hooked it all back up again: still no beep, no POST. Ideas? david
  12. That is my goal. I would love to virtualize my pfSense box and unRaid, probably on ProxMox, as that is what I'm most familiar with. It would help fix some networking bottlenecks I'm experiencing without having to figure out how to get LAGG working between my switch and pfSense. Beware: the refurb Corsair power supply I bought from Amazon contained only half of the advertised modular connectors, and buying the rest separately makes it cost almost the same as a new power supply with a limited warranty. So I returned the Corsair and bought a new EVGA G2 850. david
  13. Got most of my items today. The two CPU coolers, while the comparison table on Amazon said they were LGA2011-v3 compatible, weren't. I'm sending those back and have ordered two different ones; this time I double-checked on the manufacturer's site to ensure they are compatible. The power supply (Corsair TX850M) showed up, but this board needs two CPU ATX 4+4 power plugs and the power supply includes only one, so I ordered another cable. Wednesday is the new day for the new parts.
  14. I went with option 3.

      Option 1: the system is not extensible; as my load grows, I'd already be maxed out.
      Option 2: I was concerned about being able to add a GPU later for transcoding.
      Option 3:
        - 2x E5-2680 v3: I can easily upgrade these in the future as needed to get more Passmark.
        - Supermicro X10DAC: dual CPU, plenty of PCIe 3.0 x16 and memory slots, IPMI, an onboard SAS controller, and, as it is a workstation board, GPU support. I've seen posts on the forum where someone else has used this board.
        - 4x16GB registered ECC memory.

      This will go in my existing Cooler Master tower case (it supports E-ATX). I'll post pictures when it all shows up in a week or so. david
  15. Been looking more and put together a Xeon-based solution. The CPUs are used, but the other bits are new: 2x E5-2680 v3; a Supermicro X10DRD-iT (https://www.supermicro.com/en/products/motherboard/X10DRD-iT); and 4x8GB registered DDR4 ECC memory from the Supermicro recommended list. Each Xeon is about 10% less than the Ryzen in Passmark, but this system is far more extensible for the future if needed: I could add a GPU, upgrade the CPUs, add more memory, etc. I'm not familiar with Supermicro, but it seems to have a good name around here. Anything special I should be watching out for? thanks david