mechmess

Members
  • Posts: 16

  1. I honestly somehow missed that we were all the way to 6.11.5. Updating now. Just changed the path. I'll report back with my experience in a couple of days. Thanks for the help - my apologies for not being totally caught up on the current best practices.
  2. EDIT: I'll try switching to ipvlan to see if that helps anything (see sketch 1 after this list).... This has been a continuous issue for me for a long time now (I first posted in this thread back in April/May), and my only solution has been to move my Plex container/transcoding to a second system running Unraid 6.8.3. The 6.8.3 system has been solid. With hardware transcoding enabled, I get at most about a day and a half before a crash. Without it, I'm rock solid. There are never any related/similar system logs at the time of crash (I'm mirroring to flash and to a remote syslog server), and my system automatically restarts after the crash.
     • i9-9900k
     • Unraid 6.11.1
     • Dummy plug
     • GPU Top with nothing in the Go file
     • Official PlexInc container
     • Ran Memtest for 24 hours with no errors
     • Appdata on Cache:Prefer, with plenty of available space
     • Transcoding on the cache drive (not in RAM) - I thought this might have been the issue, so I switched away from RAM
     tower-diagnostics-20230209-0719.zip
  3. If I don't have GPU TOP installed, /dev/dri does not populate. Is there something else I need to change? I have GPU TOP installed, so maybe that is causing some sort of issue vs. just letting it roll with the defaults (see sketch 2 after this list).
  4. I'll do so. However, my system is stable for weeks on end without the driver loading. I have been re-enabling it with every RC release. Could just be a coincidence though! Thanks for taking a look!!
  5. Wanted to wait for another crash before sending. The last crash was ~23:00 on May 9th; Plex transcoding was happening at the time. I have a remote syslog server set up and included the log from that (sketch 3 after this list shows a quick way to filter it). The last few lines before the crash are:
     May 9 03:07:54 Tower CA Backup/Restore: Backup / Restore Completed
     May 9 08:02:33 Tower webGUI: Successful login user root from 192.168.7.153
     May 9 08:04:52 Tower kernel: mdcmd (36): check
     May 9 08:04:52 Tower kernel: md: recovery thread: check P ...
     May 9 08:05:05 Tower flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
     May 9 13:13:07 Tower flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
     May 9 15:51:27 Tower kernel: curl[26998]: segfault at d ip 0000151904658751 sp 00007ffd2fb6afc0 error 4 in ld-2.31.so[15190464e000+23000]
     May 9 15:51:27 Tower kernel: Code: 68 4c 8b 6c 24 70 49 89 c0 48 8b ac 24 d0 00 00 00 4c 8b a4 24 e8 00 00 00 4c 8b 5c 24 78 0f 1f 00 48 83 bc 24 f8 00 00 00 00 <0f> 84 61 01 00 00 41 0f b6 40 05 83 e0 03 83 e8 01 83 f8 01 0f 86
     tower-diagnostics-20220510-0902.zip
     syslog-192.168.7.224.log
  6. I have a 9700k running RC7, and I added a dummy plug as well, to no avail. If I keep the module from loading or disable Quick Sync in Plex, my system does not freeze (sketch 4 after this list shows one way to blacklist the module). I can provide any logs that would be of help.
  7. Yes - "sensors" does work, but I don't think it's doing the job that the sensors-detect script does (I'm manually loading some drivers at boot).
  8. Alright. I:
     • Verified NerdPack is up to date
     • Uninstalled perl, rebooted. Verified that perl now shows up as Installed: no
     • Installed perl, rebooted. Perl now shows up as Installed: yes
     No go! I'm still presented with the error: bash: sensors-detect: command not found
  9. I have a system running Unraid 6.10 RC2, and am trying to execute the "sensors-detect" script in the Unraid shell. When I type "sensors-detect" into the shell, I'm left with "bash: sensors-detect: command not found". I have installed perl with the NerdPack plugin, and can run it successfully on my other system running 6.9. Am I missing something here (see sketch 5 after this list)? Thanks!
  10. I'm having the exact same issue! @koiril - were you ever able to find a solution? Same as you, I did all the normal steps and eventually tried doing a native SSD install and then passing through the SSD and GPU - the VM always freezes when the drivers are initialized. I *am* able to pass through an Nvidia 710 that I have lying around, but the RX 580 fails, and it's not isolated to Windows either - the same thing happens with macOS and Linux VMs as well. The most frustrating thing for me is that I had it working just fine, but then swapped to a different motherboard.
  11. I think there's a bug in the drive temperature monitor thresholds - I am unable to set individual warning or critical temps for drives in my array or the cache. When I click apply, they simply go back to the global setting. This happens with my array both started and stopped, and following a fresh reboot. Changing the temps through the global setting does persist.
  12. I got this error once - deleting the custom_ovmf folder (along with /appdata/Macinabox and the folder in Domains), deleting the scripts, and reinstalling the container did the trick:
      rm -r /mnt/user/system/custom_ovmf
  13. I've got a really strange bug. I was able to successfully install and boot the VM to the desktop (Big Sur). I then proceeded to add two additional CPUs and my GPU, add the name of the VM to the script, apply the script, and successfully reboot. Following this I attempted to pass through my USB controller, following the same procedure of applying the script after editing the VM, and then attempted to boot - my display will not initialize and 1 CPU core is pegged at 100%. Removing the USB controller from the VM and rebooting Unraid does not solve the issue. I also attempted to remove the GPU, and the VNC window then says that the guest has not initialized the display. I have deleted everything and started from scratch, and the same problem pops up as soon as I pass through the USB controller. The only thing that seems to resolve it is deleting all Macinabox files and starting from scratch. Below is the log from when I have returned all VM settings to default, with the issue presenting (sketch 6 after this list shows a quick way to inspect the remaining passthrough entries):
      -device pcie-root-port,port=0xb,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x3 \
      -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
      -device pcie-root-port,port=0xa,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x2 \
      -device pcie-pci-bridge,id=pci.6,bus=pci.1,addr=0x0 \
      -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
      -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
      -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
      -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
      -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
      -blockdev '{"driver":"file","filename":"/mnt/user/isos/BigSur-opencore.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
      -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
      -device ide-hd,bus=ide.2,drive=libvirt-3-format,id=sata0-0-2,bootindex=1,write-cache=on \
      -blockdev '{"driver":"file","filename":"/mnt/user/isos/BigSur-install.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
      -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
      -device ide-hd,bus=ide.3,drive=libvirt-2-format,id=sata0-0-3,write-cache=on \
      -blockdev '{"driver":"file","filename":"/mnt/user/domains/Macinabox BigSur/macos_disk.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
      -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
      -device ide-hd,bus=ide.4,drive=libvirt-1-format,id=sata0-0-4,write-cache=on \
      -netdev tap,fd=35,id=hostnet0 \
      -device e1000-82545em,netdev=hostnet0,id=net0,mac=52:54:00:47:d7:ba,bus=pci.3,addr=0x0 \
      -chardev pty,id=charserial0 \
      -device isa-serial,chardev=charserial0,id=serial0 \
      -chardev socket,id=charchannel0,fd=36,server,nowait \
      -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
      -device usb-tablet,id=input0,bus=usb.0,port=1 \
      -vnc 0.0.0.0:0,websocket=5700 \
      -k en-us \
      -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.6,addr=0x1 \
      -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
      -usb \
      -device usb-kbd,bus=usb-bus.0 \
      -device '************************' \
      -smbios type=2 \
      -cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check \
      -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
      -msg timestamp=on
      2020-12-12 16:22:36.410+0000: Domain id=9 is tainted: high-privileges
      2020-12-12 16:22:36.410+0000: Domain id=9 is tainted: custom-argv
      2020-12-12 16:22:36.410+0000: Domain id=9 is tainted: host-cpu
      char device redirected to /dev/pts/0 (label charserial0)
  14. Thanks for taking a look!!! I successfully used the Macinabox docker to boot a Catalina install there. I copied that installer img over to the new VM folder. I also tried bringing over the boot disk to see if it was just the installer file - neither one shows up. I'll give this a try!! I was curious if there might be something goofy going on in the config.plist, but wasn't sure where to look. This I had a handle on - I was focusing on getting the installer disk to work first, before passing through the system disk. As an aside - is there any reason why in the VM world we don't put the OpenCore boot loader on the system disk's EFI partition? I've been using OC on my actual 2010 Mac Pro to get a graphical boot screen, and I know that's where it is installed on that machine.
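
Sketch 1 (re: post 2): crashes with Docker custom networks on macvlan usually leave kernel call traces in the syslog shortly before the hang. A quick, hedged way to check for them (the path is the Unraid default; the grep pattern is just a starting point, not the only signature):

   # look for macvlan-related kernel call traces in the live syslog
   grep -iE 'macvlan|call trace' /var/log/syslog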
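
Sketch 2 (re: post 3): before the Intel GPU TOP plugin managed this, the usual way to make /dev/dri appear was to load the iGPU driver manually at boot. A minimal sketch of that classic approach, appended to the go file (only needed if the plugin isn't already loading the driver):

   # /boot/config/go - load the Intel iGPU driver and let containers use /dev/dri
   modprobe i915
   chmod -R 777 /dev/dri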
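
Sketch 3 (re: post 5): with the remote syslog copy in hand, filtering for segfaults and kernel messages around the crash window narrows things down, e.g.:

   # the filename matches the remote syslog file attached above
   grep -iE 'segfault|kernel' syslog-192.168.7.224.log | tail -n 50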
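
Sketch 4 (re: post 6): on Unraid 6.9 and later, a blacklist file on the flash drive keeps the i915 module from loading at boot:

   # create once and reboot; delete the file to re-enable the driver
   echo "blacklist i915" > /boot/config/modprobe.d/i915.conf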
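
Sketch 5 (re: posts 8-9): sensors-detect is a Perl script shipped with the lm_sensors package, not with perl itself, so installing perl alone won't make the command appear. A hedged check, assuming the stock lm_sensors layout:

   # confirm whether lm_sensors actually put the script on disk
   ls -l /usr/sbin/sensors-detect
   # if it exists but isn't on PATH, invoke it directly
   perl /usr/sbin/sensors-detect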
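
Sketch 6 (re: post 13): when a passed-through device wedges the guest like this, it's worth confirming which hostdev entries are still in the VM definition. A sketch, with the VM name taken from the disk path in the log above:

   # list any remaining PCI/USB passthrough entries in the domain XML
   virsh dumpxml "Macinabox BigSur" | grep -A 6 '<hostdev'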