SpaceInvaderOne

Community Developer
  • Posts: 1741
  • Joined
  • Days Won: 29

Everything posted by SpaceInvaderOne

  1. Hi, glad it's working. You won't see any difference in performance. You are not actually using a vdisk this way; you are passing the physical disk directly to the VM by its ID.
  2. Switching the GPU and audio devices to use MSI interrupts in your Windows VM can reduce audio stuttering by optimising how interrupts are managed. Download this https://www.dropbox.com/s/y8jjyvfvp1joksv/MSI_util.zip?e=2&dl=0 and then run the utility inside the VM. It displays a list of devices and their current interrupt mode. Select all GPU and audio devices and set them to use MSI interrupts, then save your changes and reboot the VM to apply the settings.
  3. Have you tried temporarily passing them through to a different VM to see whether it's a Windows issue or a host (Unraid) issue?
  4. For a Windows VM created by converting a physical machine's hard drive, I would use the defaults but choose the Q35 chipset.
  5. Don't bind the SATA controller; simply pass the SSD through directly. Run this command in the terminal:
        ls /dev/disk/by-id/
     This will display a list of all the disks on the server by their ID. For example, here is what it shows on my server (though I've only included one SSD disk for clarity):
        root@BaseStar:~# ls /dev/disk/by-id/
        ata-CT2000MX500SSD1_2117E5992883@  ata-CT2000MX500SSD1_2117E5992883-part1@
     To pass this SSD to the VM: in the vdisk location of the VM template, instead of selecting "none", choose manual and enter /dev/disk/by-id/ata-CT2000MX500SSD1_2117E5992883 (note that I removed the '@'). Here I am passing the entire disk, not just a partition (the second listing of my disk ends with -part1). I hope this helps!
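     A quick sanity check before committing the ID to the template (a minimal sketch; the disk ID shown is just the one from my example above, substitute your own):
        # List all disks with their stable by-id names
        ls -l /dev/disk/by-id/
        # Resolve an ID to its underlying device node to confirm it really is the disk you expect
        readlink -f /dev/disk/by-id/ata-CT2000MX500SSD1_2117E5992883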
  6. IMO you have too much pinned on the first core. Unraid OS uses that core heavily for its own functions, which can contribute to your stuttering. 1. Isolate only the last 3 cores and pin those to the VM. 2. Leave the first core (0,6) unpinned. 3. Pin your containers to cores 2 and 3 (1,7 and 2,8). 4. I wouldn't bother using emulatorpin; let Unraid handle that itself, it will most likely use the first core anyway. Give that a try, it may help (a rough sketch of the equivalent settings is below).
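     Unraid normally handles isolation and pinning through the GUI, so this is purely an illustration. Assuming a 6-core/12-thread CPU where the thread pairs are (0,6) (1,7) (2,8) (3,9) (4,10) (5,11), the equivalent low-level settings would look roughly like this (the container name is hypothetical):
        # Kernel boot parameter (syslinux append line) to isolate the last 3 cores from the Unraid scheduler
        isolcpus=3,4,5,9,10,11
        # Restrict an existing container to cores 2 and 3 (threads 1,7 and 2,8)
        docker update --cpuset-cpus="1,7,2,8" my_container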
  7. It's difficult to know why your VM is freezing; more info would be needed. But as an example of troubleshooting this, we can see the OP's problem from the log. The log he posted shows:
        2020-09-15T18:08:05.594959Z qemu-system-x86_64: vfio_err_notifier_handler(0000:03:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest
        2020-09-15T18:08:05.599453Z qemu-system-x86_64: vfio_err_notifier_handler(0000:03:00.1) Unrecoverable error detected. Please collect any data possible and then kill the guest
     The message points to an unrecoverable error with the passthrough devices 0000:03:00.0 and 0000:03:00.1, which the diagnostics show is a 1050 Ti GPU:
        03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
                Subsystem: eVga.com. Corp. GP107 [GeForce GTX 1050 Ti] [3842:6255]
                Kernel driver in use: vfio-pci
                Kernel modules: nvidia_drm, nvidia
        03:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
                Subsystem: eVga.com. Corp. GP107GL High Definition Audio Controller [3842:6255]
                Kernel driver in use: vfio-pci
     So for the original poster this is what was causing his freezes. Why he was having this error is hard to know, as it can be caused by different things, for example a hardware fault with the GPU or the motherboard. If I were advising the OP I would say to: 1. update the motherboard BIOS to the latest version; 2. make sure to also pass through a vBIOS for the GPU; 3. try putting the GPU in a different PCIe slot; 4. try another GPU if possible and see if the problem continues. What I have said here is specific to the OP, and your problem could be totally different. But if you find your issue is coming from passthrough, you could try the same steps.
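     If you want to check whether your own freezes line up with a passthrough fault, these commands from the Unraid terminal are a reasonable starting point (substitute your own PCI address for 03:00; this is just a sketch of where to look, not a fix):
        # Check the kernel log for vfio or PCIe error messages around the time of a freeze
        dmesg | grep -iE 'vfio|AER'
        # Show the device at a given PCI address along with the driver bound to it
        lspci -nnk -s 03:00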
  8. Yes, it's very straightforward. We can use the qemu-img tool to convert a virtual disk image to a physical disk. If the image is a raw image (the default in Unraid), you would use this command:
        qemu-img convert -f raw -O raw /mnt/user/domains/vm_name/vdisk1.img /dev/sdX
     If the image is a qcow2 image, the command would be:
        qemu-img convert -f qcow2 -O raw /mnt/user/domains/vm_name/vdisk1.img /dev/sdX
     The /dev/sdX refers to the disk you want to copy to. Plug the disk that you want to use in the real machine into the server and it will get a letter, for instance /dev/sdg. Be careful that it's the correct one, as you don't want to write over an Unraid array disk. Also make sure the location of the vdisk for the VM is correct in the /mnt/user/domains/vm_name/vdisk1.img part.
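     Before writing anything, it is worth confirming both the image format and the identity of the target disk. A minimal sketch (the vdisk path and the sdX letter are placeholders):
        # Confirm the format (raw or qcow2) and virtual size of the source image
        qemu-img info /mnt/user/domains/vm_name/vdisk1.img
        # Confirm the size, model and serial of the target disk before overwriting it
        lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdX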
  9. Please try a full reboot of the server. Also clear your browser's cache.
  10. I wonder if a Windows update was responsible for this. The reason I think so is the restore point that you don't remember making. For significant Windows updates, like feature updates or semi-annual releases, Windows typically creates a restore point automatically. So I would guess that update caused the issue.
  11. Also a possibility: you are passing through both the GPU's audio device (which you should when passing through a GPU) and the motherboard sound device, so in Windows there will be two audio devices. I assume you are passing through the motherboard audio because that is the output you prefer to use. Windows will probably set the default to the HDMI audio from your 3070, with the onboard as the secondary. So if the sound is going out through the Nvidia audio device and you don't have speakers in your monitor, that is why. Try switching the default audio output to the secondary device. Just a possibility, but maybe that's the cause.
  12. I like keeping my VMs on a ZFS zpool. I have a dataset "domains" and then a dataset for each individual VM. From there you can snapshot each one directly from the ZFS Master plugin, or, as I do, automatically using sanoid. You can then use ZFS replication to replicate the VMs with their snapshots to another zpool. This is very efficient and a great solution in my opinion. To make this easier I have made some scripts to do it automatically. If you are interested in doing this, check out these two videos; the rough shape of the commands involved is sketched below.
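     For anyone curious what this looks like at the command line, a minimal sketch (the pool and dataset names here are made up, and in practice sanoid or my scripts handle the scheduling and incremental bookkeeping):
        # Snapshot a single VM's dataset
        zfs snapshot ssdpool/domains/windows11@backup-2024-01-01
        # Replicate it to another pool, sending only the changes since the previous snapshot
        zfs send -i ssdpool/domains/windows11@backup-2023-12-31 ssdpool/domains/windows11@backup-2024-01-01 | zfs receive backuppool/domains/windows11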
  13. Ha, lol. Oh no, Skynet knows about me, I must hide from the AI!! Thanks for your kind words, and I'm glad that you like the tutorials. Have a great day.
  14. Hi. Let me see if I understand the issue. You are passing through an Intel WiFi device to a macOS VM, and I assume you are also passing through an AMD GPU. The AMD card is resetting correctly, but the Intel WiFi device is not. Is the Intel WiFi device built into the motherboard? I wonder why you want to pass through a WiFi device when you can use a virtual NIC for internet on the VM. Is it because the device is a combined Bluetooth and WiFi card? If so, I would recommend just not using the WiFi device. Use the virtual NIC and buy a cheap £10 USB Bluetooth adapter for Bluetooth. Hope this helps!
  15. To anyone using EmbyServerBeta: there was an update about 8 hours ago which, for me, broke Emby with
        Cannot get required symbol ENGINE_by_id from libssl
        Aborted
        [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
        [s6-init] ensuring user provided files have correct perms...exited 0.
        [fix-attrs.d] applying ownership & permissions fixes...
        [fix-attrs.d] done.
        [cont-init.d] executing container initialization scripts...
        [cont-init.d] done.
        [services.d] starting services
        [services.d] done.
        [cont-finish.d] executing container finish scripts...
        [cont-finish.d] done.
        [s6-finish] waiting for services.
        [s6-finish] sending all processes the TERM signal.
        [s6-finish] sending all processes the KILL signal and exiting.
        ** Press ANY KEY to close this window **
     Temporary fix: change the repository from emby/embyserver:beta to emby/embyserver:4.9.0.5 and Emby will then start and run. In a few days, once a new version is up and the issue is fixed, we can simply set the tag back to beta.
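     In Unraid the change is made in the container template's Repository field; the command-line equivalent is roughly this (a sketch, not the template edit itself):
        # Pull the pinned release instead of the broken beta tag
        docker pull emby/embyserver:4.9.0.5
        # Later, once a fixed beta is published, pull the beta tag again
        docker pull emby/embyserver:beta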
  16. Sorry, I think it's included in Unraid now. Please see if you can find the log at /var/log/mcelog/ and post it here.
  17. Hi. Yes, we would need you to install the mcelog tool and then post your diagnostics. The reason is that this Linux tool (which you can install from the Nerd Pack plugin on CA) is used for logging and interpreting MCEs (Machine Check Exceptions). These exceptions are hardware errors reported by the CPU. Modern CPUs have built-in error detection, and when they detect a problem they generate an MCE. These errors include things like problems with the CPU itself, memory errors, bus errors, cache errors and so on. They can be: temporary errors, which are often corrected by the system; intermittent errors, which occur every so often rather than all the time and so are harder to diagnose; and fatal errors, which are obviously more serious because they can cause server crashes or even data corruption. So when these errors are reported it is good to find out what they are, as they can point to potential hardware faults early, before they cause too much trouble. However, not all errors mean something is bad. Some can be down to quirks of the CPU; if I remember correctly, certain AMD CPUs have been known to generate MCEs that are considered harmless because they are just part of the processor's normal way of working. The way the CPU's firmware or microcode is designed can also lead to harmless MCEs being reported, as can various motherboard BIOS settings, especially those related to power management and overclocking, so you may want to see if there are BIOS updates for your motherboard. But without the MCE log it will not be possible to know what the errors are, as the tool is what makes them human readable. I hope this helps.
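     Once the tool is installed, checking for decoded errors is as simple as the following (a sketch; /var/log/mcelog is the usual location the daemon logs to, but it may differ on your system):
        # View any machine check exceptions that mcelog has decoded so far
        cat /var/log/mcelog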
  18. Zpools in Unraid can get their names in 2 different ways. 1. When you create an independent zpool in Unraid, you name the pool and add the disks; the pool gets its name from whatever you called it. 2. However, if you format a drive that is part of your array in ZFS format, then that drive, although part of the array, is also its own zpool. When done this way, the zpool name is taken from the disk number. So assuming your disk6 is ZFS formatted, it is therefore a zpool, with the pool name being "disk6". A zpool can contain both regular folders and/or datasets, so the /data in disk6 could be either a dataset or just a regular folder. If it is a dataset, then yes, the dataset name would be /data (a dataset path in ZFS is poolname/dataset, so your ZFS path would be disk6/data). To see what datasets are in your disk6 (or any other zpool), install the ZFS Master plugin; then you can see the datasets clearly on the main tab. So if I understand correctly, you say you are using hard links. Hard links in ZFS do work, but with some limitations: they can only be created within a single dataset, not across datasets. For example, within your disk6/data dataset, hard links can be made between files, functioning just like they would in a traditional filesystem. However, hard links cannot span different datasets in ZFS. This means that you cannot create a hard link between a file in disk6/data and another in disk6/media. This limitation is part of the ZFS design, which emphasises data integrity and clear boundaries between datasets. Each dataset is basically an isolated filesystem in itself, which has advantages for management, snapshots and data integrity, but a downside is that it also means traditional filesystem features like hard links have these constraints (see the sketch below). I hope this helps.
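     To make the hard-link limitation concrete, a minimal sketch (the file names are made up, and the exact wording of the error can vary, but a cross-dataset hard link will be refused):
        # Show the datasets inside the disk6 pool
        zfs list -r disk6
        # Hard link within one dataset: works as normal
        ln /mnt/disk6/data/movie.mkv /mnt/disk6/data/movie-link.mkv
        # Hard link across two datasets: fails with an "Invalid cross-device link" error
        ln /mnt/disk6/data/movie.mkv /mnt/disk6/media/movie-link.mkv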
  19. Hi there. Unfortunately, integrating the ability to install from a USB stick directly into Macinabox is not feasible, due to the existing mechanisms and how the VM templates are generated. To use USB media for installation, a manual edit of a VM template to enable USB passthrough would be necessary. However, a simpler alternative, if you need to install the OS from a USB stick, would be converting your USB media to an image file. This would allow you to use the existing installer on your USB stick without requiring direct passthrough of the physical device to the VM.
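     If you go the image route, creating the image from the USB stick is a one-liner from the Unraid terminal (a sketch only; /dev/sdX is whatever letter the stick gets on the server, and the destination path is just an example):
        # Copy the whole USB stick into an image file that the VM can then use
        dd if=/dev/sdX of=/mnt/user/isos/usb_installer.img bs=1M status=progress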
  20. New Macinabox is almost complete and should be out soon, hopefully by the end of next week or shortly thereafter. It will have a few new features, such as Ventura and Sonoma support. Also, the companion User Scripts will no longer be necessary; the container will do everything itself. I also plan to add checks so the container can see whether your CPU has the features required to run macOS (i.e. checking for AVX2, etc.), plus a few other new things.
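     For anyone who wants to check ahead of time, the CPU flag is visible from the Unraid terminal. A simple sketch of that sort of check:
        # Print whether the CPU advertises the avx2 feature flag
        grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not found"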
  21. Hi. I decided to install Huginn today and was pleasantly surprised to find it readily available in CA in the new apps; it had just been added today. Thanks so much for this addition, I can't wait to start using it. A quick note for anyone installing the Huginn container: you'll need to modify the permissions on the Huginn appdata folder so that Huginn has the necessary permissions to write to that location for its database etc. (the easiest way I find is just to use the Unraid File Manager plugin to do this).
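     If you prefer the terminal over the File Manager plugin, something along these lines works; the appdata path and the ownership are assumptions, so adjust them to your setup and to whatever user the container actually runs as:
        # Example only: give the container user write access to its appdata folder
        chown -R nobody:users /mnt/user/appdata/huginn
        chmod -R 775 /mnt/user/appdata/huginn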
  22. It was working in Unraid 6.10 rc2, but sadly this no longer works in Unraid 6.10 rc3, due to the update of either libvirt to 7.10.0 or QEMU to 6.2.0. This error now happens. Also reported here: https://www.mail-archive.com/[email protected]/msg1838418.html
  23. This video shows 2 methods to shrink the Unraid array: one where you remove a drive and then rebuild parity, and a second where the drive is first zeroed and then removed, preserving parity.
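     For context, the zeroing step in the second method boils down to overwriting the drive with zeros through its array device so that parity stays in sync. A heavily hedged sketch (the video walks through the safe procedure; /dev/mdX is a placeholder for the array device of the disk being removed, and this command destroys everything on it):
        # DANGEROUS: writes zeros over the entire device; only run against the correct array device
        dd if=/dev/zero of=/dev/mdX bs=1M status=progress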