joelones

Members
  • Posts: 532
Everything posted by joelones

  1. I'm still on 6.12.3 and would like to upgrade, but I'm facing multiple issues when trying 6.12.6. Any guidance would be appreciated. The Intel GPU (CometLake-S GT2 [UHD Graphics 630]) fails to load on bootup, so the Jellyfin docker fails to start, as it is used for GPU transcoding. My Windows 10 VM with a passed-through NVIDIA Quadro also fails to start. All is fine on 6.12.3. Thoughts? clarabell-diagnostics-20231206-1504.zip
  2. I'm trying the new UniFi Network Application container and getting a Tomcat 404 error using a custom bridge setup. The previous container works well with this setup. Can anyone please advise? (unRAID 6.12.3)
  3. Just tried it again today: same issue, same call trace.
  4. No, I reverted back to 6.12.3, which is working fine. I would add this as the following:

       mkdir -p /boot/config/modprobe.d
       echo "options i915 enable_fbc=1 enable_guc=3" > /boot/config/modprobe.d/i915.conf
       mkdir -p /boot/config/modprobe.d
       echo "options i915 enable_dc=0" > /boot/config/modprobe.d/i915.conf

     What does this do? For 6.12.3 there is no need to add the modprobe lines; it just works. Odd.
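For what it's worth, those lines just create a module-option file for the i915 driver, which unRAID copies into the kernel's modprobe configuration at boot. As far as I understand it (worth double-checking against your kernel's i915 parameter docs), `enable_guc=3` asks the driver to load both GuC and HuC firmware, `enable_fbc=1` turns on framebuffer compression, and `enable_dc=0` disables display power-saving C-states. A sketch of what the commands build, using a temp dir instead of /boot so it can be run safely anywhere:

```shell
# Build the i915 option file the way the quoted commands do.
# MODPROBE_D stands in for /boot/config/modprobe.d on a real unRAID box.
MODPROBE_D="$(mktemp -d)"

echo "options i915 enable_fbc=1 enable_guc=3" > "$MODPROBE_D/i915.conf"
# Note: as posted, the second command also used '>' and therefore
# overwrote the fbc/guc line above. '>>' appends instead, so both
# option lines survive in the same file.
echo "options i915 enable_dc=0" >> "$MODPROBE_D/i915.conf"

cat "$MODPROBE_D/i915.conf"
```

So if the intent was to have all three options active, the second redirect in the quoted commands should be `>>`, otherwise only `enable_dc=0` ends up in i915.conf.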
  5. Apparently `/dev/dri/*`, which I pass into the Jellyfin docker, does not exist:

       root@clarabell:/boot/config# intel_gpu_top -L
       No GPU devices found

     Tried this, but modprobe just hangs:

       mkdir -p /boot/config/modprobe.d
       echo "options i915 enable_fbc=1 enable_guc=3" > /boot/config/modprobe.d/i915.conf

     Here are the diagnostics:

       root@clarabell:~# lsmod | grep intel
       intel_rapl_msr         16384  0
       intel_rapl_common      24576  1 intel_rapl_msr
       intel_powerclamp       16384  0
       kvm_intel             282624  4
       iosf_mbi               20480  2 i915,intel_rapl_common
       kvm                   983040  1 kvm_intel
       crc32c_intel           24576  2
       ghash_clmulni_intel    16384  0
       aesni_intel           393216  0
       crypto_simd            16384  1 aesni_intel
       cryptd                 24576  2 crypto_simd,ghash_clmulni_intel
       intel_cstate           20480  0
       intel_gtt              24576  1 i915
       agpgart                40960  2 intel_gtt,ttm
       intel_uncore          200704  0
       intel_pmc_core         49152  0

     clarabell-diagnostics-20230912-1149.zip
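A quick, generic way to check whether the i915 driver actually created its DRM device nodes (the render node is usually renderD128, but the names can vary, so this just checks for any entries):

```shell
# Check whether the kernel created any DRM device nodes for the GPU.
# Prints "present" when /dev/dri has entries, "missing" otherwise;
# safe to run on any Linux box.
if [ -d /dev/dri ] && [ -n "$(ls /dev/dri 2>/dev/null)" ]; then
    DRI_STATE="present"
else
    DRI_STATE="missing"
fi
echo "DRM nodes: $DRI_STATE"
```

If this reports "missing" while lsmod shows i915 loaded, the module loaded but failed to initialize the device; `dmesg | grep -i i915` usually shows why.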
  6. Are users having problems loading the Intel Quick Sync drivers with 6.12.4? I seem to have to revert back to .3 to get Intel Quick Sync working in my container.
  7. Just installed 6.12.4 and can't start Jellyfin due to what I think are missing Intel drivers?! Is there a package app for this now with 6.12.4? .3 didn't have this problem.
  8. Seems like a power cycle brought back the NIC's activity LEDs; not sure what happened. Could be that I'm on an old (currently overloaded) UPS, so maybe power issues. I hope it's not hardware.
  9. Here's a picture of the NICs: the right one is passed through to pfSense, and the left one is all of a sudden giving me a problem.
  10. You're right, that didn't help. Here's my zip. Did my quad NIC just die? I see activity on the quad's LEDs, but only the green LED is on. I still have the onboard NIC, which it now thinks is eth0...
  11. Before doing that: I only see a network-rules.cfg.old and don't see another. The file seems to have the valid network settings. I copied it to network-rules.cfg and will try a reboot.
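The restore step described above can be sketched like this. A temp dir stands in for /boot/config so the commands are safe to run anywhere, and the MAC address is a made-up placeholder:

```shell
# Restore unRAID's NIC-to-ethX mapping from the leftover .old copy.
# CFG_DIR stands in for /boot/config on a real unRAID box.
CFG_DIR="$(mktemp -d)"
echo '# eth0, MAC 00:11:22:33:44:55 (placeholder content)' > "$CFG_DIR/network-rules.cfg.old"

# Only restore when the live file is genuinely missing, so a working
# config is never clobbered.
if [ -f "$CFG_DIR/network-rules.cfg.old" ] && [ ! -f "$CFG_DIR/network-rules.cfg" ]; then
    cp "$CFG_DIR/network-rules.cfg.old" "$CFG_DIR/network-rules.cfg"
fi
cat "$CFG_DIR/network-rules.cfg"
```

A reboot is still needed afterwards, since the mapping is applied when the network comes up.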
  12. Can someone please help! Having a major crisis. I just updated to 6.12.4 and the settings for my quad NIC are gone; I can no longer see the other eth1-3 interfaces. I tried downgrading to 6.12.3 with the same issue. Please help.
  13. I'm currently using a second Intel quad NIC to allocate separate VLANs to a couple of my dockers. Basically, port 1 is untagged (host), ports 2 and 3 are configured as separate VLANs (attached image), and I use these bridges for certain dockers. I believe I did this because I had problems with inter-docker networking when other dockers were bridged (to the host) on the same parent interface. Could I do away with this NIC and use the onboard Intel NIC on the motherboard, setting up VLANs on a parent interface, or will I have macvlan problems like inter-docker communication with the unRAID box and VLANs?
  14. Hello. Although hardware related, I'm posting my question here as I will need to upgrade an older AMD system to an Intel 11th-gen based system, and I'm not sure how easy it is to migrate to a new motherboard + CPU combo (while keeping the disks and HBA). Will unRAID be smart enough to work with the existing configuration?

      Requirements:
      - As many PCI-E slots as possible without going to server motherboards: GPU, HBA, and possibly two Intel quad NICs (one for a VM and one for VLANs allocated to dockers).
      - Run a maximum of two VMs: one for pfSense (passing in an Intel quad NIC), and another basic Windows box for slicing models for my 3D printers.
      - Intel iGPU using Quick Sync for Jellyfin hardware acceleration.

      Proposed hardware:
      - ASUS Z590-A Prime
      - 11700K or 11600K, or even an i7-10700
      - G.SKILL Ripjaws V Series 32GB (2 x 16GB) 288-Pin DDR4 3200 (PC4 25600)
      - I already have a Rosewill RSV-L4500U server case, but I take it I will need a new low-profile heatsink for the CPU? The one I'm using for the old AMD 8350 surely might not work, right?
      - Corsair TX750 from the existing box (assuming this will still work?)

      Questions:
      - Do the hardware specs seem OK for what I need?
      - Is migrating to a new motherboard + CPU combo easy?
      - I'm using a second Intel quad NIC to allocate separate VLANs to a couple of my dockers. Could I do away with the second NIC, use the onboard Intel NIC on the motherboard, and set up VLANs? Will I have macvlan problems like inter-docker communication with the unRAID box and VLANs?
  15. Hi, I'm just curious if anyone with strong networking skills can explain the following setup to me: why it works and why it doesn't. I have a quad NIC: port 1 (untagged, 3.x subnet, VLAN 1) and a second port (VLAN 10), as below, both connected to an old Cisco switch. My docker network is as follows. As for my question: as I said, eth1 is connected to a Cisco switch, and if I set the port mode to access and allow VLAN 10, I am not able to ping anything on br1.10. However, if I set it as a trunk port allowing VLAN 1 and VLAN 10, it works. I would not expect to need a trunk port, and I clearly only want VLAN 10 traffic flowing. Perhaps I'm confused or something. Thanks
  16. Thoughts here? I've upgraded from Krypton to Leia and can no longer reliably get the Kodi headless docker to update the DB via the JSON-RPC VideoLibrary.Scan method. I have the advancedsettings.xml that was posted earlier in this thread and a passwords.xml with the credentials for the SMB path/username/password. From what I remember, the initial sources were configured with another client and subsequent update commands would then work with this headless docker; it no longer works. I just set up all my other clients on Leia and noticed that I had to change the SMB max version to v3 to get them to access unRAID SMB shares, like so:

       <setting id="smb.maxprotocol">3</setting>
       <setting id="smb.minprotocol" default="true">0</setting>

      Of course this was done via the GUI. Not sure if this has to be set on the headless as well. I know it's in guisettings.xml, but every time I try to change the above on the headless it reverts back to:

       <setting id="smb.maxprotocol" default="true">3</setting>
       <setting id="smb.minprotocol" default="true">0</setting>

      Even if I stop the docker, edit, and restart. Odd...
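One thing that might explain the reverts: Kodi rewrites guisettings.xml from its in-memory state on shutdown, so hand edits tend to be lost unless the process is fully stopped first. My understanding (an assumption worth verifying against the Kodi wiki for your version) is that Kodi v18+ also lets you pin GUI settings from advancedsettings.xml, which Kodi never writes back to, along these lines:

```xml
<!-- advancedsettings.xml sketch: pin the SMB max protocol so Kodi
     cannot revert it. The exact override syntax here is my assumption
     from the v18 docs; verify before relying on it. -->
<advancedsettings version="1.0">
  <setting id="smb.maxprotocol">3</setting>
</advancedsettings>
```

Also note that a value shown with default="true" is still in effect; the attribute only means the value matches the built-in default, so `maxprotocol` reverting to `default="true">3` may be cosmetic rather than a real change.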
  17. I need to pass the following device to a Hass OS VM (Home Assistant): "Texas Instruments TI CC2531 USB CDC (0451:16a8)". The device is a USB CC2531 dongle flashed with zigbee2mqtt firmware, which the VM sees as /dev/ttyACM0. Without the dongle passed through, the qemu-system-x86 process for this VM consumes approx 25% CPU; with it attached to the VM, the CPU rises to 150%. I really don't have any pertinent logs or anything. I'm on 6.8 stable and the VM is using Q35-2.11. Any advice or thoughts here will be greatly appreciated.
  18. @bastl So I updated to 6.8 stable and decided to try this workaround. I did try the Skylake emulation for my AMD FX8320, but it didn't seem to like it very much and gave an unsupported-CPU error when I tried to start the VM. I guess my CPU is either too old or lacks the instructions to emulate Skylake properly. Maybe I need to model an older Intel CPU, like Sandy Bridge or something? I know my model is an Opteron_G5. I had no choice but to opt for emulated QEMU64 mode; hopefully the lack of AES-NI won't impact overall CPU performance with respect to VPN usage. EDIT: I seem to have gotten pfSense to boot with AES-NI on my AMD with this:

       <cpu mode='custom' match='exact' check='full'>
         <model fallback='forbid'>Opteron_G5</model>
         <vendor>AMD</vendor>
         <feature policy='require' name='vme'/>
         <feature policy='require' name='x2apic'/>
         <feature policy='require' name='tsc-deadline'/>
         <feature policy='require' name='hypervisor'/>
         <feature policy='require' name='arat'/>
         <feature policy='require' name='tsc_adjust'/>
         <feature policy='require' name='bmi1'/>
         <feature policy='require' name='mmxext'/>
         <feature policy='require' name='fxsr_opt'/>
         <feature policy='require' name='cmp_legacy'/>
         <feature policy='require' name='cr8legacy'/>
         <feature policy='require' name='osvw'/>
         <feature policy='disable' name='rdtscp'/>
         <feature policy='disable' name='svm'/>
       </cpu>
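To confirm that a guest (or the host) actually ended up with AES-NI after a custom CPU model, check the CPU flags. On Linux the flag appears as `aes` in /proc/cpuinfo; pfSense/FreeBSD instead reports AESNI in its boot messages, so the snippet below only applies to a Linux guest or the unRAID host itself:

```shell
# Report whether the CPU exposes the AES-NI instruction set.
# Linux only; on FreeBSD/pfSense look for "AESNI" in the boot dmesg,
# or under System > Advanced > Miscellaneous (crypto hardware) instead.
if grep -q '\baes\b' /proc/cpuinfo; then
    AESNI="yes"
else
    AESNI="no"
fi
echo "AES-NI available: $AESNI"
```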
  19. OK, thanks. I may just wait for 6.9 at this point, knowing that we'll have to upgrade back to the v5 kernel anyway. Hopefully the GSO bug is squashed as well, but this is definitely a viable option in the meantime thanks to your testing.
  20. @bastl well done! I'm assuming the "Intel Skylake CPU" option is the way to go to keep AES-NI compatibility? Any downside to that? I'm on a rather old AMD FX8320 CPU.
  21. I went from rc7 to rc9 and my pfSense VM does not boot. I had the "GSO"-type bug prior. I'm passing an Intel quad NIC into my pfSense VM. Thoughts here? Guess it's a known thing:
  22. I've upgraded to v6.8rc6 and I see a question-mark icon for a custom docker; can you please elaborate on how to set it? I have an icon file in the folder /boot/config/plugins/dockerMan/images named 'custom-program_name-latest-icon.png', matching the local repository/image of 'custom/program_name'... EDIT: never mind, pointed the icon URL to a local php server...
  23. Thanks for your help, and yeah, I do have a pfSense backup (physical) box, so that's not the problem in that regard. It's more everything else that runs on unRAID (dockers, etc.) that's the problem, and the fear that I'll hose something in the process. But I do back up the flash before proceeding, so in every case I tried I was able to revert back to 6.5.3. EDIT: Solved this by replacing an older PCI dual-NIC card with a PCI-E quad NIC and swapping my Radeon HD 6450 PCI-E for a cheapo PCI Rage GPU for basic console output.
  24. I mean, it appeared to be the case. But honestly, I did not run it for too long without bringing up the pfSense VM, so I cannot be 100% certain that it doesn't freeze with pfSense off. It has been my experience, though, that it tends to freeze within a short amount of time once pfSense starts. I'm probably going to pull out the legacy PCI NIC and get another PCI-E quad NIC to use instead; perhaps something will change. Although I'm going to have to pull out the dGPU and hope that this mobo boots with no GPU.