NOLA_DireWolff

Members
  • Content Count

    34
  • Joined

  • Last visited

Community Reputation

1 Neutral

About NOLA_DireWolff

  • Rank
    Advanced Member


  1. For anyone who finds this - I ended up having to add the config file manually by pulling the generic sample config from GitHub and creating the file myself. There was nowhere I could find the config that was actually in use. https://github.com/influxdata/influxdb/blob/1.8/etc/config.sample.toml I copied that into a text editor and saved it into the root of the InfluxDB appdata directory.
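In case it helps the next person, the fix above can be sketched roughly as below. The paths are assumptions based on a typical Unraid docker setup; adjust them to wherever your InfluxDB container maps its config.

```shell
# Hedged sketch: fetch the 1.8 sample config and drop it into the
# container's appdata directory so the container picks it up on restart.
# APPDATA_DIR is an assumption; on a stock Unraid setup it is usually
# something like /mnt/user/appdata/influxdb.
CONFIG_URL="https://raw.githubusercontent.com/influxdata/influxdb/1.8/etc/config.sample.toml"
APPDATA_DIR="./influxdb-appdata"   # e.g. /mnt/user/appdata/influxdb on Unraid

mkdir -p "$APPDATA_DIR"
# Fall back gracefully if the download fails (no network, URL moved):
# in that case paste the sample config in by hand, as described above.
wget -q -O "$APPDATA_DIR/influxdb.conf" "$CONFIG_URL" \
  || echo "download failed; paste the sample config manually"
```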
  2. For sale is a used Asus WS C246 Pro motherboard. This retails for ~$250 before shipping; for sale for $150 USD, shipping included to the USA. Note: the #2 PCIe 1x slot is physically damaged and has been sealed over. All other functions are OK. Please see the photos to ensure you don't need that slot. We can combine shipping with the processor I have for sale if you like; they are compatible and ran my Unraid system very well together. I will consider a partial-value trade ($65 credit) toward an LSI SAS9300-8i. I have this currently listed on eBay. We can exchange a message through there, if you like, to verify my 800+ positive feedback rating and to confirm this is legit. Payment via PayPal or Venmo OK. Thanks for looking!
  3. This is clean and from home use. Non-smoking, no defects, factory reset. UniFi Cloud Key Gen 2 (UCK-G2), $130 USD with free insured USPS Priority shipping to the USA. Global buyers pay the shipping fee for a USPS Priority Small Flat Rate Box. I will consider a partial-value trade ($65 credit) toward an LSI SAS9300-8i. I have this currently listed on eBay. We can exchange a message through there, if you like, to verify my 800+ positive feedback rating and to confirm this is legit. Payment via PayPal or Venmo OK.
  4. For sale is an Intel Xeon E-2176G LGA 1151 processor with 9 months of use. CPU only. Removed from service 12/2019; photos taken after removal. No issues. $310 USD with free insured USPS Priority shipping to the USA. Global shipping is paid by the buyer and ships in a USPS Small Flat Rate Box. I will beat the lowest publicly advertised in-stock price you can find; please feel free to send me a link and an offer. https://ark.intel.com/content/www/us/en/ark/products/134860/intel-xeon-e-2176g-processor-12m-cache-up-to-4-70-ghz.html I have this currently listed on eBay. We can exchange a message through there, if you like, to verify my 800+ positive feedback rating and to confirm this is legit. Payment via PayPal or Venmo OK. Thanks for looking!
  5. @matthope I hope you wouldn't mind shedding a bit of your wisdom on this subject. My target is to pass through my iGPU's HDMI output, with HDMI audio, to a Debian Buster VM. I can pass through the HDMI video, but lspci and ALSA testing show no sound options if only 00:02.0 is passed through; it needs the other piece. Which HDMI output are you using? My real goal is multichannel/high-res HDMI audio for that VM. I read this whole thread you helped in, and this one - various postings seem to imply I should be successful, so I must be missing something. I've tried UEFI and Legacy and am currently in Legacy. I expected to see the sound portion as a PCI device here, but there are none. It does show in the sound card drop-down now that the edit was made to the boot config, and it shows that it is on virtio drivers. Unfortunately, if I try to start it, I get something like this (this is a SeaBIOS/i440fx test):

"internal error: qemu unexpectedly closed the monitor: 2020-03-09T01:36:07.629935Z qemu-system-x86_64: -device vfio-pci,host=0000:00:1f.3,id=hostdev1,bus=pci.0,addr=0x5: vfio 0000:00:1f.3: group 14 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver."
I followed your instructions and this is my current status:

IOMMU group 3:
[8086:3e9a] 00:02.0 VGA compatible controller: Intel Corporation Device 3e9a (rev 02)
IOMMU group 14:
[8086:a309] 00:1f.0 ISA bridge: Intel Corporation Cannon Point-LP LPC Controller (rev 10)
[8086:a348] 00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
[8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
[8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
[8086:15bb] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)

my syslinux boot entry:

label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream vfio-pci.ids=8086:a348 modprobe.blacklist=i2c_i801,i2c_smbus isolcpus=5-7,13-15 initrd=/bzroot

the lspci -v detail for group 14:

00:1f.0 ISA bridge: Intel Corporation Cannon Point-LP LPC Controller (rev 10)
    DeviceName: Onboard - Other
    Subsystem: ASUSTeK Computer Inc. Device 8694
00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
    DeviceName: Onboard - Sound
    Subsystem: ASUSTeK Computer Inc. Device 8777
    Kernel driver in use: vfio-pci
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
    DeviceName: Onboard - Other
    Subsystem: ASUSTeK Computer Inc. Device 8694
    Kernel modules: i2c_i801
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
    DeviceName: Onboard - Other
    Subsystem: ASUSTeK Computer Inc. Device 8694
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
    DeviceName: Onboard - Ethernet
    Subsystem: ASUSTeK Computer Inc. Ethernet Connection (7) I219-LM
    Kernel driver in use: e1000e
    Kernel modules: e1000e

and the VM XML hostdev entries:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>

Any ideas for what to try next?
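For anyone hitting the same "group 14 is not viable" error: it means at least one device sharing the IOMMU group with the audio controller is still bound to its normal kernel driver. A small generic script like this (a sketch, not Unraid-specific; it only reads sysfs) dumps every group member and its bound driver so you can spot which sibling still needs vfio-pci binding or blacklisting:

```shell
# List every PCI device per IOMMU group with the driver it is bound to.
# On a host without IOMMU enabled this prints nothing.
list_iommu_groups() {
    for group in /sys/kernel/iommu_groups/*/; do
        [ -d "$group" ] || continue
        n=$(basename "$group")
        for dev in "$group"devices/*; do
            [ -e "$dev" ] || continue
            addr=$(basename "$dev")
            if [ -L "$dev/driver" ]; then
                drv=$(basename "$(readlink -f "$dev/driver")")
            else
                drv="none"   # unbound device: a candidate culprit
            fi
            echo "group $n: $addr driver=$drv"
        done
    done
}
list_iommu_groups
```

Every device listed for group 14 needs driver=vfio-pci (or no driver at all) before QEMU will accept the passthrough.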
  6. As a side note: on the Unraid Dashboard my eth1 shows only 60 Mbps down and 1.5 Mbps up. That is all CCTV traffic. I'll happily take guidance on the best network setup for my use case.
  7. 16 hours of Memtest on 64GB of ECC: 1.32x passes and 0 errors. Memtest is satisfied. The CPU was at 46-47 degrees the whole time. I don't know 😭 what to test next. During the prior crashes the only things running were the official Plex docker and the ShinobiCCTV docker. Plex I have had for a year; Shinobi has been 3-4 months, so there is a slight correlation in timing, but Shinobi shows very low resource utilization.

A question on network simplification: I have eth0 and eth1, separated, and each is allowed to bridge. eth0 is my primary server interface and all of my usual dockers use it. eth1 is dedicated to my CCTV cameras and plugs into a different switch, which carries the majority of my hardwired traffic (same network, different VLAN). The Shinobi docker is given an IP address on that VLAN (eth1), which I can reach when I WireGuard in remotely via eth0. Am I being too cautious about bandwidth, the motivation being to isolate the majority of my camera traffic on eth1 like this? Would it be better to bond them and plug them both into my network? My network is only gigabit. I honestly don't know the best way.
  8. @testdasi I'll add that to my experiments list. Thank you for the idea. I finished more reading today about the ways the XML can be set up to plug devices into the guest PCIe bus with different topologies. I'll report back if I find any good solutions or get a new clue.
  9. @jonathanm I'm sorry if it was confusing. I've been working on this since November; I'm not the ask-for-help-first type.

Volumio is a custom Debian image, and it is a great piece of software. They offer it as a .img file for running on dedicated hardware and small processors such as the Rpi and others. They offer an x86 version, but it is intended to boot from flash and be the only thing running.

I have Nvidia GPU passthrough working on the Windows VM in my Unraid machine, and it works splendidly. No issues. Both of my currently installed video cards pass through perfectly; when I'm gaming on my Windows VM, HDMI video + audio + surround sound is perfect!

*Neither one passes through to the Volumio OS well.* Some cards give no video, some give no audio, and none have audio devices that are auto-discovered by the Volumio OS. I have to edit config files; they don't just run, and the changes don't persist past shutdown. Normal users don't have this issue. I could buy their hardware, or an Rpi, or go the easy route, but I wish to teach this Unraid box new tricks. Unraid is awesome (it's me that needs the new tricks).

I want to run the Volumio x86 image as a VM in my Unraid machine and pass through my weaker video card to act as an HDMI audio output from my Unraid machine. *I have not been able to figure out why I can't do this.* Likely it has to do with the intent and focus of the Volumio developers, which is real hardware, and I don't fault them for that at all. I want to overcome that hurdle, help myself, help Volumio, and help other Unraid users who might want the same sort of little VM. This software offers a bit-perfect player, internet radio, streaming, room sync, and a great web UI. Their image DOES boot perfectly as a VM with VNC as the primary display. I just can't get it to boot and run well with a video card as the primary display and the HDMI as the audio output.
The questions:
1 - Can the VM be loaded and modified via an Nvidia driver install (I can't successfully pull this off) or other internal tweaks?
2 - I have tried all of the OVMF/SeaBIOS/Q35/i440fx variations; some of them produce different results, but none of them work. Does this mean I'm missing a key setup step for a VM running a custom Debian Buster?
3 - I know very little about Docker. Would this run better as a Docker container based on their git source? Is that possible?
After months of headaches, hardware purchases (I have 4 different chipset video cards now : / ), and various Linux experiments, I'm hoping one of our resident geniuses has a bright idea. Possibly you... or @SpaceInvaderOne
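On question 1, a driver install inside the guest would normally follow the standard Debian Buster procedure sketched below. This assumes Volumio's image has working apt sources and the usual Debian non-free packages available, which may not hold on their trimmed-down build; the DRY_RUN guard just prints each command so it can be reviewed before running for real.

```shell
# Hedged sketch of the stock Debian Buster Nvidia driver install.
# DRY_RUN=1 only echoes the commands; set DRY_RUN=0 inside the VM to run them.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Enable the contrib and non-free components (the driver lives in non-free).
run sudo sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
run sudo apt update
# Headers must match the running kernel or the DKMS module build fails,
# which may be exactly what happens on Volumio's custom kernel.
run sudo apt install -y linux-headers-amd64 nvidia-driver firmware-misc-nonfree
run sudo reboot
```

If `linux-headers-amd64` does not match Volumio's kernel, that alone would explain the failed driver installs, and it would point at the kernel rather than the VM config as the blocker.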
  10. I've spent a few hundred on hardware and a good 30+ hours troubleshooting and reloading this VM over and over with various configurations and iterations. I've tried installing headers, installing legacy and modern Nvidia drivers, and changing config files. My Linux knowledge is limited, but I'm good at research and variable-based testing to solve problems. I'm stumped.

I would bet that some of you would enjoy hi-fi bit-perfect audio via HDMI out of your Unraid machine. What could I be missing? Would you be interested in helping to get this running stably as a VM? What does it take to turn it into a docker that we can share? This is called "open source"; does that mean everything we would need to make a docker container for it lives in git? https://github.com/volumio

I'm using the latest 007 image from here: https://forum.volumio.org/volumio-x86-debian-buster-debugging-party-beta-t13957.html I could not get it working with the stable release; that one is still on Debian Jessie and has very old kernel files. Maybe it can work and I just don't understand? https://volumio.org/get-started/

With some cards I have video, with some I don't. Not all will boot, I can't successfully install Nvidia drivers in that Debian build, and the build never automatically finds the proper audio device. It is easy to say "that's because the build is not done well", that the kernel is old, or other excuses, but I'm looking for a real solution, not someone to blame. This has been a bit of a personal challenge, and I feel like I'm reaching the end of my Linux experience. I'm definitely game to research, help test, and further assist the development of this. I've posted in the reddit VM forums, a bit here in Unraid, and a lot in the official Volumio forums. They are not interested in working on VMs yet, so I'm on my own. Thanks for your consideration.
  11. There were no hardware changes prior to the first three crashes. Between those and this recent one, I've changed the MOBO and CPU... same crashes. I'll run a memtest tonight and report back.
  12. These are from last night... it was found unresponsive this morning. This is the crash prior. Here is an entry I'm curious about, though nothing bad happened.
  13. This is continuing: 2 hard crashes this week. No new dockers/VMs or hardware. Can anyone help with troubleshooting? Prior to 6.8.2, with this hardware and docker setup, I had no crashes ever. I got rid of Deluge, and there was no file sharing between the last post and this one. I'll post syslog data in the next post.