Mr.Will

Members
  • Posts: 27
  • Joined
  • Last visited

Everything posted by Mr.Will

  1. Ah, I didn't know that, but it's a good "feature". Do you know roughly how long it takes to crash? And do you know whether the old password keeps being used (stored in memory) until you reboot, even after the password on the USB has been changed? Thanks!
  2. Consider this scenario:
     - Unraid has encrypted disks that require a password at boot. I turn on Unraid, unlock the drives, and leave the house for work/holidays.
     - The array is started, but Unraid requires a password for the web UI, console, and shares, like any normal secure NAS.
     - A thief breaks in.
     He can't access the shares, since they all require a password. He can't access the console, since the monitor asks for a password. But if he unplugs the USB stick with Unraid running, resets the root password in the USB file, and then puts the USB back in while Unraid is still running: will Unraid check the entered password against the old password stored in memory, or against the one in the USB file? Assume the thief doesn't reboot. If it uses the password in memory we are safe, but if it uses the password on the USB, he can easily access the web UI and read all the information, even on encrypted disks, since the array was running. Thanks
  3. Did you get this solved? I'm facing a similar issue. Thanks!
  4. @dcoulson Did you find out your issue? I have had the same problem since probably the end of last year (2022) and have been trying to fix it for several months. Everything was working perfectly up to a certain point, when the VM started crashing whenever I try to play any video game. It runs OK for a few minutes, and sometimes it doesn't crash at all, but usually it does. When it crashes I see the following error in Unraid: vfio-pci 0000:01:00.0: vfio_bar_restore: reset recovery - restoring BARs. I had Windows 10 when this started happening and tried migrating to Windows 11, which did not solve it. These are the things I tried:
     - Change the XMP RAM profile in the BIOS
     - Lower the game graphics settings
     - Update the Nvidia driver
     - Downgrade the Nvidia driver to 522.25 and 526.86
     - Run games as administrator
     - Run games as DX11 only
     - Check that the CPU is in the performance profile in Windows and Unraid
     - Remove any software that has overlays (Razer and so on)
     - Update Windows
     - Enable MSI interrupts in Windows
     - Use a vBIOS in the VM config
     - Leave the case open (in case it was a temperature issue); I also monitored the temps during play
     None of these works. My suspicion is that this has to be an issue with the Nvidia drivers. As far as I remember, I had not changed anything in the VM when this started happening. The only thing that likely changed was the driver, but a downgrade to a previous driver didn't work. So I'm fully stuck. Any ideas?
  5. It seems we crossed posts. Yes, that's basically what I ended up doing, although I added the CPU thing and it will require a little more cleanup. But other than that I believe it's the same process.
  6. I just tried the proper hooks method (qemu.d/myhook) and it works pretty well. It didn't at first, but I just needed to change the line where I kill the nvidia-persistenced process. So now all I have to do is create a scheduled task using the "User Scripts" plugin that runs at boot (first start only) and creates that file. Not sure it is worth creating a plugin for this, since it is pretty straightforward; almost anyone can do it. My current script code is based on @SimonF's plugin file:

```bash
#!/bin/bash
QEMUDFILE=/etc/libvirt/hooks/qemu.d/vm_nvidia_pers_enabler

# Create qemu.d if it doesn't exist yet.
[ ! -d "/etc/libvirt/hooks/qemu.d" ] && mkdir /etc/libvirt/hooks/qemu.d

# Write the vm_nvidia_pers_enabler hook.
# Note: the backslashes keep PHP variables and $( ) command substitutions
# literal in the generated file instead of expanding them while this runs.
cat << EOF > $QEMUDFILE
#!/usr/bin/env php
<?php
#begin vm_nvidia_pers_enabler
if (\$argv[1] == 'Windows 10' && \$argv[2] == 'prepare' && \$argv[3] == 'begin') {
    shell_exec('date +"%b %d %H:%M:%S libvirt hook: VM just turned ON. Disabling Nvidia persistence mode on host" >> /var/log/syslog');
    #shell_exec('kill \$(pidof nvidia-persistenced) &');
    #sleep(1);
    shell_exec('cpufreq-set -c 1 -g performance');
    shell_exec('cpufreq-set -c 7 -g performance');
    shell_exec('cpufreq-set -c 2 -g performance');
    shell_exec('cpufreq-set -c 8 -g performance');
    shell_exec('cpufreq-set -c 3 -g performance');
    shell_exec('cpufreq-set -c 9 -g performance');
    shell_exec('cpufreq-set -c 4 -g performance');
    shell_exec('cpufreq-set -c 10 -g performance');
    shell_exec('cpufreq-set -c 5 -g performance');
    shell_exec('cpufreq-set -c 11 -g performance');
    sleep(1);
    shell_exec('kill \$(pidof nvidia-persistenced) &');
}
if (\$argv[1] == 'Windows 10' && \$argv[2] == 'release' && \$argv[3] == 'end') {
    shell_exec('date +"%b %d %H:%M:%S libvirt hook: VM just turned off. Enabling Nvidia persistence mode on host" >> /var/log/syslog');
    shell_exec('cpufreq-set -c 1 -g powersave');
    shell_exec('cpufreq-set -c 7 -g powersave');
    shell_exec('cpufreq-set -c 2 -g powersave');
    shell_exec('cpufreq-set -c 8 -g powersave');
    shell_exec('cpufreq-set -c 3 -g powersave');
    shell_exec('cpufreq-set -c 9 -g powersave');
    shell_exec('cpufreq-set -c 4 -g powersave');
    shell_exec('cpufreq-set -c 10 -g powersave');
    shell_exec('cpufreq-set -c 5 -g powersave');
    shell_exec('cpufreq-set -c 11 -g powersave');
    sleep(2);
    shell_exec('nvidia-persistenced &');
}
#end vm_nvidia_pers_enabler
?>
EOF
chmod +x $QEMUDFILE
```

     Here I'm also changing the power state of the CPU cores assigned to the VM; not sure how much I'm saving with that, though. Of course, anyone using this will need to change the VM name from "Windows 10" to whatever theirs is called.
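The ten per-core cpufreq-set calls in the script above could be collapsed into one loop. A minimal sketch, assuming the same cores 1-5 and 7-11 are the ones pinned to the VM (the `set_governor` helper name is illustrative):

```shell
#!/bin/bash
# Sketch only: one helper instead of ten cpufreq-set lines per branch.
# VM_CORES mirrors the cores hard-coded in the hook above.
VM_CORES="1 2 3 4 5 7 8 9 10 11"

set_governor() {   # usage: set_governor performance|powersave
    local gov="$1" c
    for c in $VM_CORES; do
        cpufreq-set -c "$c" -g "$gov"
    done
}
```

The PHP hook could then call this as a standalone script, e.g. shell_exec('/boot/scripts/set_governor.sh performance') (path hypothetical), instead of one shell_exec per core.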
  7. I saw that in your own plugin page you posted before (thank you, by the way!). Right now I'm doing it like this:

```php
if ($argv[1] == 'Windows 10' && $argv[2] == 'release' && $argv[3] == 'end') {
```

     The end result should be the same, right? I like your solution of unifying it in a single "if" statement; I can try to do that. I will investigate what @ich777 said about running nvidia-persistenced and killing it after a few seconds. I think this could simplify things if we just do that after the VM is powered off. However, I think it takes about 15-20 seconds to lower the power usage, so if you try to start your VM during that time, it will error out. Am I correct? If so, then turning off persistenced before running the VM is probably good practice, I think.
  8. That's interesting. I'm literally modifying the qemu file from

```php
<?php
if (!isset($argv[2]) || $argv[2] != 'start') {
  exit(0);
}
```

     to

```php
<?php
include 'start_stop_vm_script.php';
if (!isset($argv[2]) || $argv[2] != 'start') {
  exit(0);
}
```

     I added that PHP file in /usr/local/emhttp/, and it basically has something similar to this:

```php
if ($argv[1] == 'Windows 10' && $argv[2] == 'prepare' && $argv[3] == 'begin') {
    shell_exec('date +"%b %d %H:%M:%S libvirt hook: VM just turned ON. Disabling Nvidia persistence mode on host" >> /var/log/syslog');
    shell_exec('kill $(pidof nvidia-persistenced) &');
    sleep(2);
}
```

     It does something similar to detect when the VM is being turned off. I don't like modifying the qemu hooks file, so I will definitely look at your method (it's newer than my script). It's important that we start/kill nvidia-persistenced before anything else in the VM, or it will fail. If qemu.d is called AFTER the VM is already starting/stopping, then it may not be an option.
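For comparison, the same prepare/release check can be sketched as a standalone qemu.d hook in plain shell rather than PHP. This is a sketch under two assumptions taken from the posts above: the VM is named "Windows 10", and libvirt passes the VM name, phase, and sub-phase as the first three arguments (the `handle_hook` name is illustrative):

```shell
#!/bin/bash
# Sketch of a qemu.d hook in plain shell. libvirt invokes hooks as:
#   <hook> <vm name> <phase> <sub-phase> ...
VM_NAME="Windows 10"   # change to your VM's name

handle_hook() {
    if [ "$1" = "$VM_NAME" ] && [ "$2" = "prepare" ] && [ "$3" = "begin" ]; then
        echo "VM turning on: disabling persistence mode"   # stand-in for the syslog line
        pkill nvidia-persistenced || true                  # free the card before vfio grabs it
    elif [ "$1" = "$VM_NAME" ] && [ "$2" = "release" ] && [ "$3" = "end" ]; then
        echo "VM off: re-enabling persistence mode"
        nvidia-persistenced >/dev/null 2>&1 &
    fi
}

handle_hook "$@"
```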
  9. @mgutt After a few months on this thread, I wanted to share that I created my own user script that simply calls nvidia-persistenced when Unraid starts, and modifies the qemu file to disable persistenced when a specific VM (Windows with the Nvidia card passed through) is started. When the VM is off, it enables persistenced again. With this I managed to get quite some energy savings, from 100W down to around 45W. I'm thinking of converting this to a plugin so others can benefit from it, but I'm not sure whether the solution I chose was overkill, or whether it would really benefit the community. What's your opinion? Also interested in @ich777's feedback. I'm not sure if what I'm doing should be done automatically by the system: enabling nvidia-persistenced after some idle time and releasing it when we start the VM that uses the card. From my tests, if I try to start the VM without killing nvidia-persistenced first, it just errors out. Thanks!
  10. Do you mean the allocated space in the VM tab of Unraid? What I see there is quite strange. If the VM is off, it says 407 GB allocated (out of 600). If the VM is on, it's 583 GB allocated. If I check in Windows, it's using 407 GB. And if I copy or remove a file to/from the VM, the allocated size doesn't increase or decrease. Unmap is set, of course.
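The mismatch between what the VM tab reports and what Windows reports is typical of a sparse raw image: the file's apparent size and the blocks actually allocated on disk are two different numbers, and unmapped blocks are only reclaimed once TRIM reaches the host. A small sketch for checking both numbers on the host (the `vdisk_usage` helper name is illustrative; the path comes from the disk XML in a later post):

```shell
#!/bin/bash
# Sketch: compare a sparse vdisk's apparent size with its real block usage.
# "apparent" is the capacity Windows sees; "on disk" is what is allocated.
vdisk_usage() {   # usage: vdisk_usage "/mnt/user/domains/Windows 10/vdisk1.img"
    local f="$1"
    echo "apparent: $(stat -c %s "$f") bytes"
    echo "on disk:  $(( $(stat -c %b "$f") * 512 )) bytes"
}
```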
  11. Thanks for trying anyway. Is it really recommended to have Windows identify it as an SSD, or will Unraid take care of the writing appropriately? Any drawbacks to changing Virtio to SCSI? And... how would one do that?
  12. Thanks. I tried adding rotation_rate='1' as you suggest, but it errors saying it only works with SATA, SCSI or IDE. It's Virtio, so I tried changing it to SCSI and the VM blue-screens to death. I had to turn it back to Virtio and remove rotation_rate='1'. I think I may need to somehow change the driver to SCSI before changing the config of the VM; not sure how, though. Also, Virtio is supposed to be the most performant (at least that's what Unraid's help says), so I'm not sure the change is worth it. Thanks again!
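For reference, the target configuration being discussed would look roughly like the fragment below. This is a sketch, not a tested config: the guest needs the Red Hat vioscsi driver working before the boot disk's bus is switched, otherwise it blue-screens exactly as described above.

```xml
<!-- Sketch: virtio-scsi disk with SSD (rotation_rate) and TRIM (discard) hints.
     Switch the bus only after the guest can boot from the vioscsi driver. -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
  <target dev='sdc' bus='scsi' rotation_rate='1'/>
  <boot order='1'/>
</disk>
```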
  13. Thank you for your reply @JorgeB. As I mentioned in my original post, I already did that, but "HDD" still shows in the task manager. Does that mean "HDD" is going to show regardless? Should I not worry about that?
  14. Hello unraiders. I have a Windows 10 VM inside Unraid, physically residing on the SSD cache. The "C:" drive is a thin-provisioned file (not passthrough), and G: is just the Google Drive virtual unit stored inside C:. It has all the Red Hat virtio drivers installed and so on. However, Windows detects the device as an HDD in Task Manager instead of an SSD. In the defrag console it displays as "thin provisioned disk" (sorry, the picture is in Spanish). I'm guessing Windows is not taking full advantage of the SSD, and possibly not using TRIM. My questions are:
     - Is it really required to change it to SSD somehow? The underlying OS (Unraid) should treat it as an SSD anyway.
     - If so, how can I change it to SSD?
     Below is the disk definition. I already tried adding discard='unmap' after cache (following another forum post), but it still shows the same. Thank you!!

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows 10/vdisk1.img' index='2'/>
  <backingStore/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
```
  15. In the end it's not a matter of the number of cores, because it also happens with fewer cores, although much less frequently. However, I think I found the issue. I'm passing through an NVMe drive and an RTX 3070 graphics card with its audio. If I pass through the NVMe and the audio device, the VM gets stuck. If I only pass through the NVMe and not the audio, it's fine. Likewise, if I pass the audio and not the NVMe, it's fine as well. So basically it can't handle having both the NVMe and the graphics card's audio at the same time. I don't know why this happens, but using a normal virtual disk stored on the NVMe works just fine. My suspicion is that my 11th-gen Intel i5 has some sort of shared PCIe bus for these devices that just doesn't work well with Unraid. I spent almost a month trying to fix this issue and I'm quite angry at Unraid right now. I hope to recover the love after spinning up a few containers and VMs. I hope this helps others.
  16. Thank you for replying. I leave cores 0 and 6 for Unraid and a Docker container, isolate the rest, and assign them just to the VM. I always add them in pairs, so 2, 4 or 6 cores seem fine, but 8 or 10 cores just freeze the VM.
  17. I found out that I can make the system work without apparent freezes by assigning 6 cores or fewer. If I add more cores, it just freezes constantly. My CPU is an Intel 11600K with 12 threads. Any ideas? I'm desperate! @ich777 @mgutt Of course, I have core isolation established for all the cores I try to assign to the VM, and the cores I assign are never used for anything else, just this VM. Also, I always try to add consecutive cores and threads.
  18. Just to make it clearer: when I tried the MSI "fix" I followed this guide, except that instead of editing the registry I used the MSI tool. Originally I had MSI enabled for the GPU but not for the GPU's audio. After the "fix" I had MSI enabled for both, as shown by lspci, but the problem still occurs. I also tried switching the audio to the onboard device: the audio works, but the quality changes. For example, in games some devices sound different, so I need to make the HDMI audio work. Other things I tried are changing the BIOS from OVMF to SeaBIOS, the machine type from Q35 to i440fx, and creating the VM from scratch. None worked. The guest VM's Windows log doesn't show anything either; nothing while the PC is frozen. Any ideas @ich777 or @mgutt?
  19. I tried this and it's "working". I can even get as low as 42W with only Unraid running, which is great with my current hardware. However, there are 2 drawbacks:
     - With the primary display set to CPU in the motherboard, if no monitor is plugged into the mobo's HDMI it will fall back to the PCIe card, which does have the monitor attached. The only solution I found was to plug the monitor into the mobo while booting Unraid, and then move it to the PCIe card. I think one of those "HDMI dummy emulators" might work. Any ideas?
     - The most problematic: the VM gets stuck constantly as soon as I boot it up, which is the same behavior I saw before. The VM becomes totally unresponsive for a few minutes, and when it comes back it only lasts a few seconds until it freezes again. How can I fix this? The only way I found to make it work was to either remove the audio device or boot everything in legacy mode.
     I tried changing the bus and slot so the GPU and audio use the same ones, and it doesn't make any difference. I also tried putting the audio device in MSI mode, but same thing.
  20. Yes. If I enable CSM in the motherboard (required to boot Unraid in legacy mode), then the iGPU can't be set as the default graphics. I will try this again at the beginning of next week and report back. Thank you both!
  21. My fault. I ran so many tests that I think I'm not explaining myself well (and possibly mixing things up). Basically:
     - UEFI boot without audio device = everything fine (but no audio)
     - UEFI + Nvidia HDMI audio = continuous VM freezes
     - CSM + Unraid legacy boot + Nvidia HDMI audio = almost everything fine; the VM froze just once
     In every case, passthrough was done with the Nvidia card assigned to vfio. Without it, if I remember correctly, the VM could not boot and the Unraid log said the Nvidia device was busy. Maybe I was doing something else wrong, like not setting the CPU graphics as the main one (?). Do you think it should work with UEFI + no vfio + Nvidia graphics plugin + CPU graphics as main in the BIOS? If this makes sense, I can try it when I get back home in a few days.
  22. Understood. But how do you think I should do that? Without legacy boot, passthrough didn't work well: the computer hung every few seconds when I had the GPU's HDMI audio passed to the VM. If I removed the audio device, it was fine. I spent 2 days dealing with this, and only when I set it to legacy was I able to use the HDMI audio without the VM freezing all the time. Thanks again for your help!
  23. Good to know. Thank you, and good luck! I have tried what you suggested (I hope I'm not missing something). Setting intel_pstate=passive seems to use about 6W less, so that's good. However, "nvidia-persistenced failed to initialize". I had installed the Nvidia driver plugin, but since the card is assigned to vfio, nvidia-smi doesn't work. I'm guessing "nvidia-persistenced" is the same as "nvidia-smi -pm 1" (?). Do you know how the people in the German post managed to set Nvidia persistence even when the graphics card is passed through (VM off)? I don't know if it's relevant, but in order to pass through the Nvidia card without issues I had to also set Unraid's boot to legacy, and for that I had to enable CSM (Compatibility Support Module) in my Asus board. CSM disables the onboard graphics option, so I can't set the CPU graphics as my main output. I tried different configurations, like connecting my monitor to the onboard HDMI (which is blank after reboot) or not connecting anything, but the Nvidia plugin in Unraid still can't talk to the card. However, if I power ON the VM now (cores still in powersave), I think the driver in the VM kicks in, and after a few minutes idle the usage goes down to 48W! Is there any way to get this same result without turning on the VM? For that, I think Unraid must be able to talk to the card while it's on vfio. Finally, I have seen the VM become unresponsive for about 20 seconds a couple of times. Not sure if this is related. Thanks again!
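Whether the host can talk to the card at all can be checked directly in sysfs. A sketch, using the 0000:01:00.0 address mentioned in an earlier post (adjust to your GPU; the `gpu_driver` helper name is illustrative):

```shell
#!/bin/bash
# Sketch: report which kernel driver currently owns a PCI device by
# reading the sysfs driver symlink directly.
gpu_driver() {
    local dev="${1:-0000:01:00.0}"
    if [ -L "/sys/bus/pci/devices/$dev/driver" ]; then
        basename "$(readlink "/sys/bus/pci/devices/$dev/driver")"
    else
        echo "none"
    fi
}

gpu_driver 0000:01:00.0
```

If this prints "vfio-pci", host-side tools like nvidia-smi and nvidia-persistenced cannot reach the card, and persistence can only come from the driver inside the VM, which matches the 48W-only-with-VM-on behavior described above.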
  24. I forgot to mention that I already played with powertop a little, and also tried the auto-tune option, but it didn't make much of a difference. I noticed, though, that there are only C1, C2 and C3 states, and nothing above that. I tried changing some related options in the BIOS, but that didn't reveal other states (C7, etc.). Then I read that those states only exist when running on battery, so I set that aside. Should I be seeing states above C3? Wow, that's quite a post. I will have to dig deep into that, and first of all learn how to run scripts. No, I don't use nvidia-persistenced AFAIK, but I will definitely give this a go. Thank you!