JackSafari

Members
  • Posts

    69
  • Joined

  • Last visited

Everything posted by JackSafari

  1. The best I can determine is that the cache drive ran out of free space, and that corrupted the configuration of the only active VM (Windows). I was able to create new VMs using the existing vDisks, and so far everything is back to normal.
  2. Is there a way to recover missing VM configurations? The VM drives still exist, and I have no idea what could have gone wrong. The VMs that are now suddenly missing were not running when I rebooted Unraid. There appears to be nothing wrong with Unraid; the three remaining VMs work without error. EDIT: I did notice that the 2TB cache drive had dropped to 16MB of free space. tower-diagnostics-20230802-1104.zip tower-syslog-20230802-1803.zip
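For anyone who hits the same thing: a periodic free-space check on the cache pool would catch this before configuration corruption. A minimal sketch, assuming the pool is mounted at /mnt/cache (the path and the 10 GiB threshold are assumptions, not Unraid defaults; Unraid's built-in notification settings can also warn on pool usage):

```shell
#!/bin/sh
# Warn when a filesystem drops below a free-space threshold.
check_free() {
    path="$1"
    threshold_kb="$2"
    # POSIX df -Pk: column 4 of the second output line is available space in KB
    avail_kb=$(df -Pk "$path" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$threshold_kb" ]; then
        echo "WARNING: $path has only ${avail_kb}KB free"
        return 1
    fi
    echo "OK: $path has ${avail_kb}KB free"
}

check_free /tmp 1   # demo call; on Unraid, e.g.: check_free /mnt/cache 10485760
```

Run from cron (or Unraid's User Scripts plugin) so the warning arrives before the pool actually fills.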
  3. Thanks... That explains things. It has been there so long I completely forgot when it got installed. During the install of 6.12.2 there was a notification that a tool pack was removed, but it appears the USB tab was not removed with it. EDIT: I have removed the Nerd Pack tools plugin. This removed "USB" from the menu tabs.
  4. Below is all that I get for the USB tab. All other tabs work as expected. Tried: rebooting Unraid; accessing Unraid from other computers and devices (Windows 11, iPhone, iPad); clearing the browser cache; waiting 1+ hours for the page to load; both Chrome and Edge. usbtabpage.txt
  5. Currently my Windows 10 VM works nearly 100%, except that I can't get Bluetooth enabled. I'm not sure what configuration steps need to be taken in Unraid so that the Windows VM can access the Bluetooth hardware. Windows Device Manager is reporting a driver error, but I can't find any Bluetooth errors in the system event log. The VM has Bluetooth selected. tower-diagnostics-20230712-0226.zip tower-syslog-20230712-0924.zip
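In case it helps anyone searching later: on most consumer boards the Bluetooth radio is a USB function rather than a PCIe device, so it generally has to be passed through to the VM as a usb-host device like any other USB device. A quick, hedged way to locate it from the Unraid shell (output format and device numbers vary by system):

```shell
# Find the Bluetooth controller among the USB devices
lsusb | grep -i bluetooth
# Note the Bus/Device numbers from the matching line, then pass that
# device (or its vendor:product ID) through in the VM's USB settings.
```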
  6. After 24 hours, no further lockups. It appears that reinstalling the virtio drivers and/or switching to Q35-7.1 has solved the problem.
  7. Here is the most recent diagnostic. tower-diagnostics-20230620-1402.zip Edit-1: In an effort to solve the problem, today I reinstalled the virtio drivers and switched from i440fx-7.1 to Q35-7.1. Edit-2: It has been a couple of hours and the Windows 10 VM remains stable. I have done some moderate stress testing that previously triggered the lockup (the screen freezes, the VM is 'paused' by Unraid and cannot be unpaused). I am going to let the Windows 10 VM run for at least 24 hours without any further config changes or reboots.
  8. Unfortunately, making the suggested changes did not resolve the problem. Overnight the VM locked up again while sitting idle; the desktop clock froze at 8:06am. At that point the VM had been running OK for over 12 hours and had been idle for about 6 hours. Any further suggestions to try?
  9. Yes, that is likely the cause of the problem, because it was the only change to the VM's config that I have made recently. I needed an extra step to get Unraid to accept the <memoryBacking> change back to nosharepages: I had to remove <filesystem type='mount' accessmode='passthrough'> .... </filesystem> Otherwise, Unraid rejects the change as a configuration error. Is it possible to configure virtiofs to work with a Windows 10/11 VM that has various other hardware passthroughs (e.g. GPU)?
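For context on why Unraid rejects the combination: a virtiofs <filesystem> mount requires shared guest memory backing, which is incompatible with <nosharepages/>. A sketch of the form libvirt expects when virtiofs is in use (the element names are standard libvirt domain XML; the share paths are illustrative, not from my config):

```xml
<!-- virtiofs requires shared memory backing in the same domain XML -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/mnt/user/shared'/>
  <target dir='shared'/>
</filesystem>
```

Which is why reverting <memoryBacking> to <nosharepages/> only validates once the <filesystem> element is removed as well.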
  10. I am having the same, or a very similar, issue on a stable VM that has been running for over a year. I am still looking for a solution. I posted my details here.
  11. I am having the exact same issue. My Windows 10 (version 22H2, build 19045.3086) system ends exactly as the log posted above. The problem recently started for reasons unknown. I also have GPU passthrough. The VM has been very stable for over a year; this is the first time I have experienced a fatal VM problem on Unraid. Observations: Unraid "pauses" the VM, it does not stop it. "Resume" does not resume the VM. "Force Stop" kills the VM, but starting it again leaves it highly unstable, and Unraid pauses it within 1-2 minutes. The only way I am able to clear the problem temporarily is to restart Unraid. After a restart I can run the VM again and it will be stable for a limited period, sometimes many hours. The problem is more likely to occur during video streaming, such as watching YT videos. This is where the problem appears to be happening. Unraid's Windows 10 VM log:
-device '{"driver":"vfio-pci","host":"0000:2d:00.0","id":"hostdev0","bus":"pci.0","addr":"0x8","romfile":"/mnt/user/isos/vbios/GTX1060.rom"}' \
-device '{"driver":"vfio-pci","host":"0000:2d:00.1","id":"hostdev1","bus":"pci.0","addr":"0x9"}' \
-device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/001/005","id":"hostdev2","bus":"usb.0","port":"3"}' \
-device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/001/006","id":"hostdev3","bus":"usb.0","port":"4"}' \
-device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/003/002","id":"hostdev4","bus":"usb.0","port":"5"}' \
-device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/001/004","id":"hostdev5","bus":"usb.0","port":"6"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
2023-06-19T08:10:19.827059Z qemu-system-x86_64: libusb_set_interface_alt_setting: -5 [NOT_FOUND]
....
This is how the log ends every time Unraid forces the Windows 10 VM to pause.
tower-diagnostics-20230619-0137.zip tower-syslog-20230619-0825.zip WindowsLog.txt
  12. Suggestion/request: make this more obvious in some manner. This is feedback, not a complaint. It took a bit longer than expected because the small "+" icon is off to the left side and only appears when the VM is shut down. I was running the VM at the time, looking for how to add an additional drive, and could not find the option. I understand a drive can only be added once the VM is shut down, but while it is running there are no visual clues or hints. It would help others in the future.
  13. Thanks. You are quick to reply. I should have done some research before posting the question. Chat AI is great for answering such questions; AI has already solved a few Unraid tech problems for me.
  14. Thanks. I logged on with a new account because I did not have access to my regular account at that time. I ended up rebuilding the VM settings from scratch. The Windows 10 VM drives still existed and were not damaged. It turned out to be a good thing, because for reasons unknown the previous settings were causing the Windows 10 VM to hang for 40 minutes when starting. I think it might have been because I mapped one of Windows' document folders to an Unraid share, and Windows didn't like the mapped location for some reason and hung for 40 minutes.
  15. What is the significance of this error/warning? Specifically, what impact does it have on Unraid that they are not loaded? I am aware that I can temporarily fix it with the following Unraid shell commands, but the message returns after the server is rebooted:
modprobe usbip_host
modprobe vhci_hcd
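One way to make the fix survive reboots, assuming the standard Unraid boot flow: append the modprobe calls to /boot/config/go, the script Unraid runs at every startup (the path is the usual Unraid convention; back the file up before editing):

```shell
# Run once from the Unraid shell; adds the module loads to the boot script
cat >> /boot/config/go <<'EOF'
# Load USB/IP kernel modules at boot
modprobe usbip_host
modprobe vhci_hcd
EOF
```

The modules will then be loaded on every boot, so the warning should not return.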
  16. After 12 hours it had only transferred 500GB, with an estimated finishing time of over three weeks. I decided to reboot the server, which reset the rebuild, but the reboot improved the transfer rate dramatically. I have no idea what was slowing the transfer down, but after the reboot I stopped all Docker containers and did not start any VMs.
  17. What could be causing such a slow transfer rate during a data rebuild (HDD upgrade from a 4TB drive to a 12TB drive)? When the rebuild started it was showing a 180MB/sec transfer rate, but it quickly dropped to under 20MB/sec, dipping as low as 3MB/sec. As noted below, it will take days, not hours, to rebuild the drive. Upgrade drive: Seagate Exos X18 12TB 7200 RPM 512e/4Kn SATA 6Gb/s 256MB Cache 3.5-Inch Enterprise HDD (ST12000NM000J) tower-syslog-20230523-0824.zip tower-diagnostics-20230523-0122.zip
  18. I am able to boot the Windows Tiny-11 setup ISO and start Windows setup, but it fails to find any drive to install Windows on. It appears to be missing the device driver needed to detect the vDisk I created using Unraid (Add VM). Where can I find the required device driver?
  19. Thanks for this thread. I am about to upgrade a similar hardware configuration, going from a 4TB drive config to a 12TB config. Currently I have 2x4TB parity drives and 3x4TB data drives; the existing array is 80% full out of 12TB total. I will be upgrading to 2x12TB parity drives, 1x12TB data drive (new), and 3x4TB data drives (existing array). My plan is to replace both parity drives one at a time, then add the new 12TB data drive to the existing array of 3x4TB drives. The config also has a 1TB NVMe cache drive, 64GB RAM, and a 5600 6-core AMD CPU. If there is any further information I need to know, please advise.
  20. Currently I have a 4TB-drive Unraid configuration. If I were to step up, what HDD size would be recommended? The use case is home data and Plex media, currently about 11TB in size and growing at 1-2TB per year. I have a total capacity of 12TB (1x4TB parity, 3x4TB data, 1x1TB NVMe cache). I don't want to go too large and overpay for storage that will sit unused for at least a couple of years. I could go to 8TB drives, but I am wondering whether it would be more cost effective to jump to 12TB drives, starting with 1x12TB parity and 1x12TB data plus some of the existing 4TB HDDs. I have already had one data drive fail, and Unraid worked as promised; the system continued without interruption as if there had been no HDD failure.
  21. Yeah, that was the first place I looked, and nothing obvious showed up. However, I did isolate the problem to my login profile. It was taking me up to 40 minutes to log out and about the same to log in. When I switched to a newly created account, there wasn't any problem. It is likely related to the fact that I had moved my user profile to a drive other than "C:". Technically that should not be a problem, but I learned many years ago not to alter Windows' default system settings unless there is good reason. It remains unknown why the problem is isolated to logon/logoff; there are no problems using Windows 10 once logged on.
  22. I have been having this problem for several months: when I start or reboot a Windows 10 VM, the VM hangs for 22 minutes after I enter the login passkey. After 22 minutes the desktop appears and Windows works without any further problems. Is there anywhere in Unraid or Windows I can look to help debug what is causing the Windows 10 VM to hang for 22 minutes? Typically this is related to a driver hanging, but I have not been able to track down any failed driver because everything works fine after the Windows desktop appears. tower-diagnostics-20221207-0043.zip
  23. The only difference is that the new 5600G has graphics on the CPU; everything else is the same between the CPUs. No other hardware or software is being changed or upgraded. Is there anything I need to know in advance, or documentation I should read, before changing the CPU? The objective is to configure Unraid to display video using the onboard video port on the back of the motherboard. Currently I have an NVIDIA GTX 1060 card installed and everything works as expected. Using the onboard video for Unraid will free up the NVIDIA card to be used by a VM.
  24. I am just curious why Slackware was selected for Unraid. No judgement implied. It's been around since the early 90s, and at the time Slackware was the first Linux distro I had heard of other than Red Hat.