[GUIDE] Proxmox Virtualize UnRaid --Experimental with v7


Recommended Posts

Posted (edited)

Your mileage will vary... from EOL and repos, to what is doing what... It comes down to what you want it to do and how you want to interact with it...

While the devs don't want Unraid virtualized, as it can cause instability, I thought I would share my notes on how I'm running Unraid as a VM under Proxmox... YOU WILL NEED TO USE AN HBA FOR UNRAID DISKS. This is due to how Unraid spins down and handles the array... while disk-by-ID passthrough can work, it adds extra disk thrashing...

To accomplish this you will need to use an HBA... There are quite a few edits and changes that need to be made on the host, in the VM, and inside Unraid...

Step 1: Install Debian, then install Proxmox on top.
If you install Proxmox via Debian, you keep Debian's security and release handling, are able to use GRUB instead of systemd-boot / the proxmox-boot-tool, and get a better experience with how the boot disk and other storage volumes are used.
Debian install wiki instructions: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

*Note that you can replace chrony with any other NTP daemon, but we recommend against using systemd-timesyncd on server systems, and the ntpsec-ntpdate option might conflict with bringing up networking on boot on some hardware. Configure packages which require user input on installation according to your needs. It is best to disable the systemd-timesyncd service so you don't brick the system on reboot... use chrony from Proxmox...
 

systemctl disable --now systemd-timesyncd
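Chrony is normally pulled in with Proxmox, but if it's missing you can install it directly (a minimal sketch, assuming the standard Debian package name):

apt install chrony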


Debian/Proxmox sources list for installing PVE: https://pve.proxmox.com/wiki/Package_Repositories

 

root@pve:~# cat /etc/apt/sources.list
#deb cdrom:[Debian GNU/Linux 12.7.0 _Bookworm_ - Official amd64 NETINST with firmware 20240831-10:38]/ bookworm contrib main non-free-firmware
deb http://deb.debian.org/debian/ bookworm main non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm main non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware

# bookworm-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm-updates main non-free-firmware

# This system was installed using small removable media
# (e.g. netinst, live or single CD). The matching "deb cdrom"
# entries were disabled at the end of the installation process.
# For information about how to configure apt package sources,
# see the sources.list(5) manual.

#Proxmox Sources
#deb http://deb.debian.org/debian bookworm main contrib
#deb http://deb.debian.org/debian bookworm-updates main contrib

# security updates
#deb http://security.debian.org/debian-security bookworm-security main contrib

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription

#Proxmox 7.2
#deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

*A Proxmox subscription is not required, but it helps with security and usability...

With PVE installed, let's go through Proxmox VFIO / PCIe device passthrough.
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://pve.proxmox.com/wiki/PCI_Passthrough


#Grub Stuff: https://wiki.ubuntu.com/Kernel/KernelBootParameters  https://manpages.ubuntu.com/manpages/bionic/man7/kernel-command-line.7.html 
Edit /etc/default/grub with nano; we're looking at the line GRUB_CMDLINE_LINUX_DEFAULT=
*Order is everything!

consoleblank=0 libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 kvm.ignore_msrs=1 intel_iommu=on pcie_acs_override=downstream,multifunction nvme_core.default_ps_max_latency_us=5500 default_hugepagesz=1G hugepagesz=1G transparent_hugepage=always rootflags=noatime pci=noaer pcie_aspm=off intremap=no_x2apic_optout 
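For context, a hedged sketch of how a trimmed-down set of these options sits inside /etc/default/grub (keep only the options your hardware actually needs):

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"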


Other noteworthy GRUB options... (these all worked under kernel 5.15, which can be installed on Proxmox 8.4, pulled from the bullseye repo)
#We want to blacklist the disks by ID on the HBA to help reduce disk thrashing...
modprobe.blacklist=ata-diskID1,ata-diskID2


#To help force lspci -v devices such as the HBA or graphics card into Unraid...
pci-stub.ids=
vfio-pci.ids=
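As a hedged example, using the HBA hardware ID we find below (1b4b:9215), the GRUB entry might look like:

vfio-pci.ids=1b4b:9215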
 

#GPU Pass Through Remove Frame Buffers
video=vesafb:off,efifb:off,simplefb:off,astdrmfb initcall_blacklist=sysfb_init

-- In my case with Nvidia, I'm using vGPU, so nouveau is left open for mdevctl and the open-source drivers...

It's similar with systemd-boot: you edit the kernel command line in /etc/kernel/cmdline and refresh with the proxmox-boot-tool.
Note that some GRUB options don't exist as systemd-boot options...
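A minimal sketch of the systemd-boot flow (proxmox-boot-tool is the stock Proxmox tool):

nano /etc/kernel/cmdline
proxmox-boot-tool refresh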


Don't forget to update GRUB: if you edit /etc/default/grub, then run:

update-grub



*We want to make a GRUB boot option for IOMMU and VFIO for our HBA, as we want the driver in use to be vfio-pci for the VM...

List VFIO PCI IDs:

lspci -v

To list the driver in use:
06:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller
       Kernel driver in use: vfio-pci
       Kernel modules: ahci
 

lspci -n -s 06:00

to list the VFIO hardware ID (vendor:device):


root@pve:~# lspci -n -s 06:00
06:00.0 0106: 1b4b:9215 (rev 11)

My VFIO PCI ID is 1b4b:9215.

Example VFIO option in config:
/etc/modprobe.d/vfio.conf

root@pve:~# cat /etc/modprobe.d/vfio.conf
#HBA-Unraid
options vfio-pci ids= disable_idle_d3=1 enable_sriov disable_denylist

#USB PCIe card passthrough - Unraid
options vfio-pci ids= disable_idle_d3=1 enable_sriov disable_denylist

#Graphics card passthrough options - Nvidia - try vGPU first...
#options vfio-pci ids= disable_vga=1

#vfio-pci module parameter reference:
#parm: ids: Initial PCI IDs to add to the vfio driver, format is "vendor:device[:subvendor[:subdevice[:class[:class_mask]]]]" and multiple comma separated entries can be specified (string)
#parm: nointxmask: Disable support for PCI 2.3 style INTx masking.  If this resolves problems for specific devices, report lspci -vvvxxx to [email protected] so the device can be fixed automatically via the broken_intx_masking flag. (bool)
#parm: disable_vga: Disable VGA resource access through vfio-pci (bool)
#parm: disable_idle_d3: Disable using the PCI D3 low power state for idle, unused devices (bool)
#parm: enable_sriov: Enable support for SR-IOV configuration.  Enabling SR-IOV on a PF typically requires support of the userspace PF driver, enabling VFs without such support may result in non-functional VFs or PF. (bool)
#parm: disable_denylist: Disable use of device denylist. Disabling the denylist allows binding to devices with known errata that may lead to exploitable stability or security issues when accessed by untrusted users. (bool)
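As a hedged example, filling in ids= with the HBA ID found above (adjust to your own device):

options vfio-pci ids=1b4b:9215 disable_idle_d3=1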


Don't forget to update the initramfs: if you edit /etc/modprobe.d/vfio.conf, then run:

update-initramfs -u -k all


Check IOMMU and remapping: 

dmesg | grep -e DMAR -e IOMMU
dmesg | grep 'remapping'


So we should now have IOMMU enabled...

Next, the udev rule and GRUB blacklist for passing disks by ID.
https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM) 
YOU HAVE BEEN WARNED!

Hot-plug/add a physical device as a new virtual SCSI/SATA disk:

qm set 592 -scsi2 /dev/disk/by-id/ata-DiskID
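You can then confirm the disk landed in the VM config with the stock qm tool:

qm config 592 | grep scsi2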


I will still recommend using an HBA, since passing that one device removes host access entirely and hands the disks to the VM. Passing individual disks by ID can cause other problems, which is why we need to set up rules and other things on Proxmox to work around them...
 

apt install lshw

 

lsblk
ls -l /dev/disk/by-id
lshw -class disk -class storage


Example:
root@pve:~# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Nov  9 13:00 ata-WD_easystore_240GB_####### -> ../../sdd
/dev/disk/by-id/ata-WD_easystore_240GB_#######
In GRUB, I blacklist ata-WD_easystore_240GB_#######
Here we need the list of ata-WD_easystore_240GB_####### serials for the udev rules...

Create a Udev Rule

sudo nano /etc/udev/rules.d/99-ignore-vm-disks.rules

 

# Ignore disk /dev/sdd
KERNEL=="sd*", ENV{ID_SERIAL}=="WD_easystore_240GB_#########", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"

^ Add your disks...

Restart udev and apply rule:
 

sudo udevadm control --reload-rules
sudo udevadm trigger
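To double-check the serial the rule matches against, a quick sketch using udevadm (adjust /dev/sdd to your disk):

udevadm info --query=property --name=/dev/sdd | grep ID_SERIAL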


System Reboot and test...
After rebooting, verify that these disks are ignored by Proxmox and remain untouched by running:

lsblk -f


This configuration should ensure that the specified disks remain untouched by Proxmox and are available for pass-through to VMs.

So Lets recap:

We have IOMMU enabled
We have an HBA ready to pass physical disk to unraid
We have set Proxmox to ignore the disks, leaving them to the guest VM

OK, we are now ready for an Unraid VM...
There are some VM ID###.conf edits required...

Let's begin: create the VM, and set the name and ID ###
[screenshot]

No Disk Drive, Linux 6.x
[screenshot]

 

It is recommended to use q35 and UEFI; it will work with both OVMF (UEFI) and SeaBIOS...
[screenshot]

 

If you want a 5 GB cache disk... I went beta and ditched the array for a ZFS pool only; this is more for a btrfs partition for the Unraid swap plugin...
[screenshot]

Otherwise, hit the trashcan; no disk... the HBA will provide them...
[screenshot]


Set the # of cores you want it to run... (We will be changing some of these settings in the vm###.conf)
[screenshot]

*Recommended: use CPU type host, and set flags based on your Intel/AMD host for microcode...
 

Give Unraid a minimum of 8 GB RAM, and DO NOT USE BALLOONING RAM! Unraid lives in RAM...
[screenshot]

 

It is recommended to use a virtual-model NIC such as the Intel E1000.
[screenshot]

We should now have a VM to add devices to, and can then edit the conf manually to give it a better run...
[screenshot]

Next let's add the USB Unraid boot disk:
[screenshots]

 

*Pass the USB flash drive by passing the entire port!

Next, let's add the HBA.
[screenshot]
My HBA registers as 2 separate devices under the same VFIO PCI ID. YOU MUST PASS BOTH, IN IRQ ORDER!

[screenshot]

We are now ready to edit the config manually

Locate the VM configuration file.
VM configs live in /etc/pve/qemu-server/
 

cd /etc/pve/qemu-server/


Run ls to see all of your PVE VM config files...
In our case the ID is 105, so we will edit 105.conf
 

nano /etc/pve/qemu-server/105.conf


[screenshot]

We need to fix some things...
1. Add args. If you're on AMD:

args: -device amd-iommu

This exposes an emulated IOMMU to the Unraid VM...

If Intel, use:
 

args: -device intel-iommu


Next we need to change the cpu: option...

 

cpu: x86-64-v2-AES

to:
 

cpu: host,hidden=1,flags=+pcid

*It's recommended to run hidden=1, as with Windows guests: it hides that KVM is running, which can help with PCIe device passthrough and stop Code 43 errors...

Save and exit. You now have a good-to-go Unraid VM.

Last, we set the Unraid USB to boot first.
[screenshot]

 

*If Unraid won't boot with UEFI, edit the VM and use SeaBIOS...
[screenshot]

Edited by bmartino1
Topic Name change with Typo - Data fixes
Posted (edited)

*I have this working with stable 6.12.13, Unraid 7 beta 4, and Unraid v7 rc.1.
With that, we should now see a console and the ability to boot into Unraid:

Example of my experimental test machine for Unraid:

[screenshots]

 

So let's load the Web UI and make some key changes to fix some small things...

Change 1: since this is beta 7... I use some user scripts to fix things here and there...


Some are nice, but all are optional...

What's required? Well, in my case q35 SeaBIOS has a known error noted in the bug reports (q35 UEFI doesn't show this), and some older Opteron systems had these issues on kernel 6... so let's disable that feature...
Web UI > Main > Syslinux Configuration:
 

kernel /bzimage
append initrd=/bzroot default_hugepagesz=1G hugepagesz=1G transparent_hugepage=always kernel.cprng-disable.jitterentropy=true

*Another syslinux command line is posted later... to help the VM guest OS...

[screenshot]

This will fix the CPRNG/jitter-entropy errors, if any, at kernel start. See bug report:


If you want QEMU guest agent installed, we need to look at the extra folder and install some software.
DO AT YOUR OWN RISK!!!

Thanks, SimonF, for the QEMU agent assist and script!
The last prepackaged binary was for Slackware 14.2; one doesn't exist for 15 / current...

We need the qemu-guest-agent package...
Example 3rd-party downloads:
https://slackbuilds.org/repository/14.2/system/qemu-guest-agent/
https://slackware.pkgs.org/current/alienbob-x86_64/qemu-guest-agent-6.2.0-x86_64-4alien.txz.html
https://packages.slackonly.com/pub/packages/14.2-x86_64/system/qemu-guest-agent/

We also need a missing library:
https://slackware.pkgs.org/15.0/slackware-x86_64/liburing-2.1-x86_64-2.txz.html


*Do at your own RISK!!! It's better to use SimonF's command in the go file!

cd /boot/extra/
wget https://slackware.uk/slackware/slackware64-15.0/slackware64/l/liburing-2.1-x86_64-2.txz
wget https://slackware.uk/people/alien/sbrepos/current/x86_64/qemu/qemu-guest-agent-6.2.0-x86_64-4alien.txz
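Unraid installs packages found in /boot/extra at boot; if you'd rather not wait for the reboot, a hedged sketch using Slackware's installpkg on the files just downloaded:

installpkg /boot/extra/liburing-2.1-x86_64-2.txz
installpkg /boot/extra/qemu-guest-agent-6.2.0-x86_64-4alien.txz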

Great. We now need to reboot to apply the syslinux settings and install the packages...

We need to make a user script to start the QEMU guest agent...

[screenshot]


Old script, kept for posterity...

#!/bin/bash
# Wait for boot to settle, then start the QEMU guest agent init script
sleep 10
chmod +x /etc/rc.d/rc.qemu-ga
/etc/rc.d/rc.qemu-ga start

*Again, see SimonF's info in his reply below:
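A minimal sketch of wiring his one-liner into the Unraid go file (assuming the standard /boot/config/go path):

echo '/usr/bin/qemu-ga -l /var/log/qemu-ga.log -d &' >> /boot/config/go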


So what does adding the QEMU guest agent get you?

Some improvements with Proxmox qm commands, especially for backups... and the IP shown on the dashboard:
[screenshot]

But wait, there's more. I'm using vGPU; what about the Nvidia driver to use the device?
Let's take a look:

We will need to use the Nvidia plugin, thanks to ich777. Sadly, the plugin officially supports only standard NVIDIA graphics cards, but it works very well!

Step 1
Run lspci to check that you can see your graphics card.

Install the open source Driver: update and download.
Open Source Driver - This package includes the Open Source Kernel modules and the libraries/executables from the proprietary Nvidia driver package.
Supported GPUs: This driver packages only supports Turing based cards and newer!
ATTENTION: If you want to use the Open Source driver with GeForce and/or Workstation cards you have to create the file "/boot/config/modprobe.d/nvidia.conf" on your USB boot device with the contents, before you reboot your server:
options nvidia NVreg_OpenRmEnableUnsupportedGpus=1

Follow the instructions... we need that modprobe option...

nano /boot/config/modprobe.d/nvidia.conf
options nvidia NVreg_OpenRmEnableUnsupportedGpus=1

 

Then reboot the Unraid machine. This will install the open-source driver.

With Unraid fully booted, go to the Nvidia plugin (as we mainly wanted the modprobe... and some libs from the open-source package...) and click:
[screenshot]

 

If using a Pascal vGPU like a Tesla P4, this works with: [screenshot]


and Update & Download...
Then reboot; you will now have a working NVIDIA vGPU driver running, for bare-minimum stuff like Plex transcoding...

Nvidia has removed, and is removing, support for Pascal drivers... If doing vGPU, it's best to use a Turing card...

Edited by bmartino1
Unraid step guide- typo/data
Posted (edited)

So why won't the devs support this? Well, that's because Unraid and Slackware have additional requirements that a virtualized environment has to meet... Everything above, while it has been stable for me, may not be for you. Let's take a look at some things...

Let's install the Dynamix System Info plugin and take a look around...

[screenshot]

Tools > System Profiler
[screenshots: System Profiler overview, BIOS, Motherboard, Processor, Cache Memory, Memory Summary, Memory Devices, Ethernet]

As you can see...

The virtualized Unraid instance is missing data about its motherboard... RAM shows up as one DIMM... and other data is missing...

 

Some CPU data and information is reported statically, so I appear processor-locked to 2.0 GHz.

(This is what Proxmox reports to the VM as a static value; review the Proxmox summary of the VM to see the full threads/cores being used in load balancing...) So while I have an AMD at 2.5 GHz / overclocking, I'm still getting my performance (there is some loss from being nested...)

It's for this reason that YOU SHOULD NOT virtualize Unraid, and if it is virtualized, it will be unsupported! Do so at your own risk!

You will then be playing the virtualized bare-metal workaround game...
That said, there are some benefits to virtualization...
Mainly in backups, depending on how you handle disks.

Another example is to run TrueNAS Scale for NFS/Samba and a full ZFS implementation, then virtualize Unraid with a single cache disk: pass the Unraid USB through, make a single btrfs cache vdisk on TrueNAS, and use Unraid for Dockers only...

Edited by bmartino1
Unraid Disclaimer and warning - typo/data
Posted
45 minutes ago, bmartino1 said:

If you want QEMU guest agent installed, we need to look at the extra folder and install some software.
DO AT YOUR OWN RISK!!!

It's already in Unraid.

 

Just need to add this to go file.

 

/usr/bin/qemu-ga -l /var/log/qemu-ga.log -d &

Posted
4 minutes ago, SimonF said:

It's already in Unraid.

 

Just need to add this to go file.

 

/usr/bin/qemu-ga -l /var/log/qemu-ga.log -d &

Awesome, Thank you, good to know!

Posted

Awesome guide! I will review it and make changes to my installation. I migrated my Unraid from ESXi to Proxmox a few months ago and would like it to run as stable as it was on ESXi.

Posted (edited)
On 11/10/2024 at 8:43 PM, bmartino1 said:

this is due to how unraid spins down and handles the array...

First: great write-up!


I have been passing disks with no problems; sure, I have to deal with smartctl and spindown on the Proxmox host. I am now SSD-only, but when I used spinning rust, I simply ran "hdparm -S60 /dev/disk/by-id/YOUR-PASSEDTHROUGH-DISK-HERE" on Proxmox startup to make the passed disk spin down after 5 min. Worked like a charm.

Edited by BarbaGrump
Posted (edited)
On 11/15/2024 at 11:09 AM, BarbaGrump said:

First: great write-up!


I have been passing disks with no problems; sure, I have to deal with smartctl and spindown on the Proxmox host. I am now SSD-only, but when I used spinning rust, I simply ran "hdparm -S60 /dev/disk/by-id/YOUR-PASSEDTHROUGH-DISK-HERE" on Proxmox startup to make the passed disk spin down after 5 min. Worked like a charm.


Glad this is helping people. I didn't realize there was a separate forum location when I originally made the post... whoops!

The reason I went Proxmox is for vGPU... better VMs, etc... Unraid is my go-to for Docker / LXC, and with it already stable with my data and Samba share, I don't see why it shouldn't be a VM... If this keeps working in the stable 7 release, or if I can get this plugin to work with the patched / normal vGPU by running the Nvidia .run file...
https://github.com/stl88083365/unraid-nvidia-vgpu-driver

I may go back to unraid...

Cool, I was aware of hdparm and other things one could do... but even with a startup script for disk-by-ID, there are other instances where similar sh cron tasks for hdparm and other commands (with lshw and other tools) are needed to check Proxmox's hold on a disk. There is more to the udev rules and blacklisting... I do other things as well, but this is a quick baseline, as a lot of it is on the Proxmox forum and their wiki...

Mainly rant:
*If you have the blacklist and udev rules from above for disk-by-ID passing, that should be enough, based on older testing on Proxmox 6/7; 8.4 is new to me, and I HATE kernel 6 for a lot of things they broke... S*#T just worked in kernel 5 and still does... With Proxmox and the EOL of kernel 5, this may not be viable for vGPU stuff later... but hey, it works now and is currently stable for me... (you should also install the Linux headers...)

apt-get install pve-kernel-5.15
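(A hedged sketch for the headers; I'm assuming the matching pve-headers metapackage name here:)

apt-get install pve-headers-5.15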

*Especially for vGPU with Pascal cards...

I recommend a Turing card like the 2080, and PoloLoco's vGPU guide...
https://gitlab.com/polloloco/vgpu-proxmox

It's more about going through the ~10,000 lines that were removed in kernel 6; a lot of code referenced those lines, and they are still working through some of the blowback... all for the new ARM stuff, open source, and the different types of ARM... it's a whole thing that I could rant on about...
#Rant over...

I will still recommend an HBA...
It's easier to test and troubleshoot things later, with the ability to boot bare-metal from the Unraid flash for testing if bug reports are needed for Lime Tech. Yes, there are some side things you can do at the Proxmox host level, but I will always recommend an HBA over disk-by-ID passing. (Not much changes at first boot to bring Unraid up on bare metal with an HBA, too, for support if needed...)

For me, it has more to do with Proxmox's smartmontools auto-running SMART, after a bad experience in the past... When testing TrueNAS with disk-by-ID passing, TrueNAS and Proxmox had more disk thrashing and problems. An HBA fixed that.

When testing with Unraid in the past, I found issues with how Unraid's array touches disks, and found that Proxmox would also try to stop Unraid from spinning down the disks, causing thrashing and extra reads/writes. I'd rather have Unraid control the disks, record SMART, etc. An HBA fixes that...

On another note...
When using the Proxmox installer, it creates logical volumes and may also try to auto-mount the ZFS pool I used in Unraid / TrueNAS... That's why I recommend installing Debian first, as it can be a nightmare stopping Proxmox from doing things to disks meant for VMs.

An HBA can fix that too, but it's about what the Proxmox kernel sees at boot, what the fully booted host sees, and when Proxmox decides to take disk actions... So I recommend the blacklist and udev rule, and looking into the other Proxmox configs, crons, and logrotate settings, and how their default parameters are set, adjusting to your needs if doing disk-by-ID.
 

Edited by bmartino1
spelling
  • 3 weeks later...
Posted (edited)

Posting more to share among friends, for easier code grab-and-go...
With version 7 rc1 released, I thought I would add some extra info.

I run AMD. I can almost claim this to be just as stable now...

A side Proxmox configuration: nested virtualization... (turtles all the way down.)
 

cat /sys/module/kvm_amd/parameters/nested

^- show "1" if nesting is enabled...

AMD:

echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
modprobe -r kvm-amd
modprobe kvm-amd

Intel:

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm-intel
modprobe kvm-intel

^- to enable nested VM....
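The Intel check mirrors the AMD one above; it should print "Y" when nesting is on:

cat /sys/module/kvm_intel/parameters/nested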


Also, since microcode is applied at the host level on Proxmox, let's make sure we have both the mitigations and microcode for Intel and AMD...
 

apt update && apt install intel-microcode amd64-microcode swtpm 
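After a reboot, you can confirm the microcode actually loaded via dmesg:

dmesg | grep microcode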


Backup: this is the 100.conf I use on AMD for my BMM-Unraid test instance.
 

root@pve:/etc/pve/qemu-server# cat 100.conf 
#10 cores 16 GB of Ram
#PCIE Pass through%3A
#HBA with 3x16TB disk
#-PVE grub and mod probe blacklist for disk by id. if disk ever change, update there...
#-USB pcie device for offsite backup and other docker devices.
#
#System is meant for Data storage and Dockers only!
agent: 1
args: -device amd-iommu
balloon: 0
boot: order=usb0
cores: 10
cpu: host,hidden=1,flags=+md-clear;+ibpb;+virt-ssbd;+amd-ssbd;+amd-no-ssb;+pdpe1gb;+aes
efidisk0: data-pve:100/vm-100-disk-1.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci0: 0000:04:00,pcie=1
hostpci1: 0000:06:00,pcie=1
hostpci2: 0000:0e:00,pcie=1
hostpci3: 0000:01:00.0,mdev=nvidia-259,pcie=1
hotplug: network
machine: q35
memory: 16384
meta: creation-qemu=9.0.2,ctime=1727936737
name: UnRaid
net0: virtio=BC:24:11:A1:95:73,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: data-pve:100/vm-100-disk-2.qcow2,backup=0,iothread=1,size=5G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=51090d87-06eb-40b1-84c4-604c0bb7594b
sockets: 1
tpmstate0: data-pve:100/vm-100-disk-0.raw,size=4M,version=v2.0
usb0: host=3-7
vmgenid: b5a6e175-eb0d-41ac-9180-232addccaad3
root@pve:/etc/pve/qemu-server# 

*Note the cpu: line. This is for AMD; Proxmox fixed most of these setting flags in the UI:

[screenshot]

*This depends on your CPU and the Unraid syslog... With v7 rc1 I needed to add flags for AMD to report that mitigations are in effect...
So I added extra flags; yours depend on your host processor. Proxmox has introduced a virtual IOMMU; I'd rather deal with bare-metal IOMMU...

If you trust the Proxmox host, you can also use a plugin on Unraid and disable mitigations...

example in logs:
Dec 3 21:48:36 BMM-Unraid kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 3 21:48:36 BMM-Unraid kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 3 21:48:36 BMM-Unraid kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode

As a VM, mitigations may not apply...

Also, an updated Unraid syslinux command line:
 

Command line: BOOT_IMAGE=/bzimage initrd=/bzroot default_hugepagesz=1G hugepagesz=1G transparent_hugepage=always kernel.cprng-disable.jitterentropy=true acpi=force pci=nocrs amd_iommu=on iommu=pt mitigations=auto noapic

*To help with host/VM integration.

 

Explanation of Each Parameter

default_hugepagesz=1G and hugepagesz=1G:
Optimizes memory usage for high-performance workloads by enabling 1GB huge pages.

transparent_hugepage=always:
Ensures huge pages are used wherever possible to improve memory performance.

kernel.cprng-disable.jitterentropy=true:
Disables jitter entropy for better performance in virtualized environments.

acpi=force:
Forces ACPI on, to enable power and system management in virtualized environments.

pci=nocrs:
Avoids conflicts with PCI resource allocation reported by ACPI.

amd_iommu=on:
Enables the AMD IOMMU for PCI passthrough and device isolation.

iommu=pt:
Sets the IOMMU to passthrough mode for improved performance in virtualized environments.

mitigations=auto:
Automatically enables all available CPU mitigations for vulnerabilities like Spectre and Meltdown.

noapic:
Disables the Advanced Programmable Interrupt Controller (APIC) for improved stability in certain virtualized setups.


With this testing I have had an uptime of more than 2 months on v7 beta 4, and am now testing with v7 rc1; things seem to be working great, with hardly any noticeable issues that I can find at the moment.

Edited by bmartino1
7RC1 typo - data
