bastl

Everything posted by bastl

  1. @stFfn Try machine type Q35-2.6. Everything above that version caused issues for me. With the vDisk Bus set to VirtIO it works for me and the mounted disk will show up.
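For reference, if you edit the VM's XML directly instead of using the template dropdown, the machine type sits in the <os> block. A minimal sketch (the exact version string depends on your QEMU build):

<os>
  <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>
</os>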
  2. I would almost advise against the first generation. Sure, you can get it dirt cheap these days, but for the same money you can already get the 2nd generation, which offers better RAM support and is more power-efficient while delivering more performance. All Ryzen chips benefit from faster RAM, and that alone pretty much rules out the first generation for me, since it had a lot of problems there. And with a BIOS update your board even supports the third generation already, which added a whole lot more on top. Once the 5000 series becomes available, prices for the 3xxx will drop quickly too. I wouldn't rush into buying a 1700 here. Especially in games the difference between 1st and 2nd gen, and again to the 3rd, is huge. One thought that only occurred to me after I had sent the post: maybe you could borrow a card somewhere to test with.

I've been using VMs with Unraid as my daily driver for 3 years now. I started with a TR4 1950X for over 2 years and have been on a 3960X for the last half year. Sure, it's not a regular Ryzen, but the chiplets, the RAM support and the Unraid support are practically identical. The 1950X had just as many RAM problems at launch as the 1000 series, if not more, due to quad channel and more components on the board. After a first month of fiddling and exchanging ideas with the forum here, everything ran fine though: 2 VMs, each with its own GPU and USB controller plus a passed-through NVME. The 2000 generation wasn't a reason to upgrade for me, despite more possible cores and better support. AMD unfortunately dropped support for the 3000 Threadrippers on TR4, so I moved to a new platform. What can I say, basically everything ran out of the box from day one, and virtualization support improved yet again: splitting the individual devices into separate IOMMU groups is now possible without an extra patch. Many users have reported a similar improvement with 3rd gen Ryzen; you can pass devices through to VMs much more easily now than on the first generation. The only problems I had in all that time really only came from me testing Unraid's RC versions every now and then. I reported the problems reproducibly here in the forum, and bang, the next stable release included a fix. Oh, and a Windows update once broke the driver for a passed-through USB controller, but that would have happened without Unraid too. Windows being Windows 😬

One more general note about your board: unfortunately you only have ONE slot for a GPU, plus 2 PCIe x1, 1 NVME and 4 SATA. There's basically no room for expansion. Trust me, once you've gotten a taste of everything you can do with Unraid (Docker, VMs, data hoard), you'll very quickly want/need another slot for an HBA, more NVME, or more SATA ports for additional drives. As for the point that has come up a few times already, "a dedicated GPU for Unraid itself" is basically irrelevant. Unraid is essentially designed to run headless. The only case where it makes sense is if you use a Plex Docker and need an extra GPU for video decoding, and even that is questionable; who streams 5-10 videos in parallel in one household?

Limetech themselves say that running Unraid with a GPU to display a GUI only makes sense for troubleshooting and isn't really recommended. And even that is possible: if I pick the corresponding boot entry, Unraid starts directly with the first GPU and shows me a desktop with a browser. But honestly, you don't need it; I haven't accessed it once in 3 years, especially since it renders at an extremely low resolution and is a pain to use. Calling up the GUI in a browser over the network or from inside a VM is much more practical.
  3. From an energy-saving perspective, merging the two makes little sense in your case, I think, if you want to install a more power-hungry CPU at the same time. As a NAS your server would keep running around the clock and thus draw more power. As far as I know, passthrough of the integrated AMD Vega isn't possible, though I'm not 100% sure; even dedicated Vega cards cause enough problems on their own, so Nvidia is definitely preferable. A dedicated GPU will definitely raise power consumption, which you should keep in mind if the box runs all day. Your 400W power supply should be sufficient for an upgrade if you don't overdo it on the GPU side; a 1050 Ti should work without problems. For a CPU upgrade, make sure your board actually supports the CPU. The 3000 series should still be fine; with the 5000 series the question is whether the board will still get a BIOS update. Some B450 boards are supposedly getting support early next year, the question is just which boards exactly. The first two Ryzen generations had their share of problems with Unraid, especially with PCIe passthrough, and RAM support wasn't ideal either, so you'd have to read up on that as well.

Otherwise, running a VM with a dedicated GPU should be possible. I have a 1080 Ti and a 1050 Ti each passed through to its own VM, though on a Threadripper 3960X. TRX40 is pretty easy when it comes to passthrough: most devices are cleanly separated into individual IOMMU groups, which is the prerequisite for passing a device through to a VM. B450 is more limited in PCIe lanes, and depending on how the PCIe slots are populated, the GPU may not end up separated into its own IOMMU group but lumped together with other devices, which makes passthrough nearly impossible. Your current B450 has only one slot for a GPU, so the fallback of trying a different slot for troubleshooting is practically unavailable to you, and on some configurations passthrough doesn't work in the first PCIe slot. Unfortunately, nobody can or will give you a 100% guarantee that everything will work the way you want.
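If you want to check the IOMMU grouping of a board yourself, Unraid lists the groups under Tools > System Devices; from a console, a common snippet like the following shows the same thing (a generic sketch, nothing Unraid-specific):

#!/bin/bash
# Print every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done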
  4. @bigmac5753 Not 100% sure, but 3000 series Ryzen should be supported with Unraid 6.9, currently in beta.
  5. As I said earlier, I also have issues similar to yours trying to pass through the onboard audio controller: Unraid will kinda freeze and I have to reset the server. That was the first issue I stumbled across when I got my TRX40. The sound card issues are fixed for me since I noticed the sound card itself is split into 2 USB audio devices, which I'm able to pass through to different VMs and run both at the same time. GPU passthrough never was an issue for me; 2 GPUs for 2 VMs, rock stable from day one. Onboard USB controller passthrough showed similar issues with Unraid becoming unresponsive, but not with all controllers on my board. I think one of the two Starship ones caused the issues for me; the 3.1 controller passes through fine and is the only one I use and need. The FLR warning/error produced by the other controller will be fixed with newer Linux kernels. Maybe check the Unraid 6.9 beta and test it, or use an FLR-patched kernel; I think there is a custom kernel in one of the Unraid forums which includes the FLR patch and the Vega patch.
  6. The PCI IDs in my syslinux config are for the audio controller itself, not for the USB audio device: 1022:1487 = onboard audio controller, 1b21:2142 = onboard USB 3.1 controller. I have a kinda similar issue with one of the other USB controllers and an extra PCI card, like you. The FLR issue should be fixed in newer kernels; Unraid 6.9 should come with a fix for this.
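For reference, such an entry is just part of the append line in /boot/syslinux/syslinux.cfg. Roughly like this, with the IDs adjusted to your own hardware (a sketch, not a drop-in):

label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1022:1487,1b21:2142 initrd=/bzroot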
  7. I only have the checkbox set for one of the USB audio devices for my VM. The "Starship/Matisse HD Audio Controller" itself I don't use at all, but I have an entry for it in the syslinux config to prevent Unraid from using it, just in case. I have no issues with this so far; everything is working fine. For my main VM I also have an entry for one of the USB controllers in my syslinux config. In my case it's an "ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller" on the Aorus Xtreme. A multi-USB hub from Anker with a card reader and a couple of USB ports sits on my desk and is connected to the ASMedia controller. The following entry is from my XML for that controller. The bus address might be different for you. Keep in mind, it only works if the controller is in its own IOMMU group, separated from other devices.

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x46' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev3'/>
  <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</hostdev>
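To find the right bus/slot values for your own controller, something like this from the console should do (the address 46:00.0 is just my example):

# vendor:device IDs and bus addresses of all USB controllers
lspci -nn | grep -i usb
# confirm which IOMMU group a controller sits in (replace the address with yours)
readlink /sys/bus/pci/devices/0000:46:00.0/iommu_group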
  8. @mmmeee15 I have noticed something similar. Not sure what triggers the cursor in the VNC window to sometimes disappear, but it often happens when I change the resolution inside the VM or change some graphics settings. For me it helped to change the cursor appearance in the guest OS itself to an inverted or dark one, and then it always shows up again. Somehow the cursor only disappears for me when the guest has it set to a white one, which is the default on most operating systems.
  9. A subvolume is a requirement for snapshots to work; just formatting a drive with BTRFS is not enough. I think there are a couple of things you're not understanding correctly about BTRFS and its features, or maybe I wasn't clear enough. Source and target both have to be formatted with BTRFS. Snapshots are differences between 2 subvolumes, and a subvolume is presented to the user as a share/folder. Small advice: please read the full post from JorgeB again for a better understanding of how this BTRFS feature works and what the difference between reflinks and snapshots is.
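As a rough illustration of that difference (the paths here are just examples): a reflink clones a single file, a snapshot covers a whole subvolume.

# reflink: instant copy-on-write clone of one file on the same BTRFS filesystem
cp --reflink=always /mnt/cache/VMs/win10/vdisk1.img /mnt/cache/VMs/win10/vdisk1.bak
# snapshot: instant copy-on-write clone of an entire subvolume
btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup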
  10. Not sure what Unraid will change in the future, so you'd better not touch the default "domains" share. In the advanced settings of the VM manager you can point it to a new path for your VMs. This is how I did it; example from my setup for my main "VM share":

btrfs subvolume create /mnt/cache/VMs

This is the first step if you want to move on with BTRFS snapshots. It creates the subvolume for your VMs, in my case on the cache. At the same time a user share with the same name will be created; you can find it under Shares besides all the shares you already have, and you can configure it like any other share. Simply move your VMs to this new share, adjust the paths in the XML, and that's it. The next step is to create a read-only snapshot (see a couple posts above), in my case "/mnt/cache/VMs_backup", which also creates a new share on the cache and will be the base for future snapshots. Anything that changes in "/mnt/cache/VMs" compared to "/mnt/cache/VMs_backup" will be included in a new snapshot.

Each BTRFS subvolume is handled as its own share: as soon as you create a subvolume, a user share will be created by Unraid. Technically you can create a subvolume for each VM, but then you need an extra script for each VM too, and with a couple of VMs it can get messy. It's easier to have, let's say, a "productive VM" subvolume which includes all the important VMs you regularly want to back up with 1-2 scripts, and a second path for test VMs in case you play a lot with VMs. In my case, as said earlier, my main 5 VMs are sitting on the cache drive and another unassigned 500GB SSD hosts a couple of VMs I use for testing only. When creating a new VM I only have to adjust the path for the vdisk to include or exclude it from the snapshots, and that's it.

Keep in mind each initial snapshot needs the same space on the target as all the source VMs you want to back up. Let's say you have 5 VMs, each with 100GB allocated, that you want to back up. With your idea you'd have to transfer 500GB of data every Sunday plus the changes during the week. The changes might not be that big, but with only the Sunday backups, over 5 weeks you need 2500GB alone on your RAID 10 SSD cache. This way you wear out your SSDs really fast; better use a spinner as the target.
  11. Do you have any devices passed through to the VM, such as a GPU, USB or network controller? Some devices aren't able to reset correctly when the VM is shut down or rebooted, and only a server restart helps. Newer AMD GPUs, for example, are among the most affected.
  12. Keep in mind, the only thing you need for the snapshot feature is that both source and target are formatted BTRFS. You can use the cache drive as the target, sure, but keep in mind you can't simply copy those backups from this storage to, let's say, the XFS-formatted array; Mover won't work if you're planning this. The cache isn't the best solution for this anyway. Imagine your backups filling up your cache drive until it's full, preventing Docker from working or causing issues transferring files over your network to a share using that cache. When the Unraid 6.9 build with multi cache pool support is released, it might be a good option to have a second pool only used for backups, for example.

For a restore you simply copy the files you need from a snapshot to wherever you want them: overwrite a broken vdisk, for example, or copy to another unassigned device for tests with a new VM. It's up to you whether to use cp or Krusader, both will work; a restore is basically a simple file copy. The snapshots are differential and all based on the initial read-only snapshot. You can delete all snapshots between the first initial one and your last one, or keep some in between, it doesn't matter; what's essential is that you keep the first one. Over time each snapshot will use more and more space, because the changes compared to the first one will keep growing. At some point you have to recreate a fresh, up-to-date initial read-only snapshot.
  13. @Jaster Sry, I linked you the wrong thread. Here is the one I use for my snapshots. Method 2 is what I use. The initial snapshot I do by hand every 2-3 months; for this snapshot I shut all my VMs down to have them in a safe shutdown state.

1. Create a read-only snapshot of my VMs share. This share isn't the default "domains" share which is created by Unraid; it is already a BTRFS subvolume on my cache, created as described in the thread from JorgeB, and hosts all my VMs.

# create readonly snapshot
btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup
sync

2. Send/receive the initial snapshot copy to the target drive mounted at "VMs_backup_hdd". This process will take some time, transferring all my vdisks.

btrfs send /mnt/cache/VMs_backup | btrfs receive /mnt/disks/VMs_backup_hdd
sync

3. After that I have 2 scripts running. The first script runs every Sunday, checks if VMs are running, and if so shuts them down and takes a snapshot named "VMs_backup_offline_" with the current date at the end.

#!/bin/bash
#backgroundOnly=false
#arrayStarted=true
cd /mnt/cache/VMs_backup
sd=$(echo VMs_backup_off* | awk '{print $1}')
ps=$(echo VMs_backup_off* | awk '{print $2}')
if [ "$ps" == "VMs_backup_offline_$(date '+%Y%m%d')" ]
then
    echo "There's already a snapshot from today"
else
    for i in `virsh list | grep running | awk '{print $2}'`; do
        virsh shutdown $i
    done
    # Wait until all domains are shut down or timeout has been reached.
    END_TIME=$(date -d "300 seconds" +%s)
    while [ $(date +%s) -lt $END_TIME ]; do
        # Break while loop when no domains are left.
        test -z "`virsh list | grep running | awk '{print $2}'`" && break
        # Wait a little, we don't want to DoS libvirt.
        sleep 1
    done
    echo "shutdown completed"
    virsh list | grep running | awk '{print $2}'
    btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d')
    for i in `virsh list --all --autostart | awk '{print $2}' | grep -v Name`; do
        virsh start $i
    done
    sync
    btrfs send -p /mnt/cache/VMs_backup/$ps /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
    if [[ $? -eq 0 ]]; then
        /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS Send/Receive beendet" -d "Script ausgeführt" -m "$(date '+%Y-%m-%d %H:%M') Information: BTRFS VM Offline Snapshot auf HDD erfolgreich abgeschlossen"
        btrfs sub del /mnt/cache/$sd
        #btrfs sub del /mnt/disks/VMs_backup_HDD/VMs_backup/$sd
    else
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS Send/Receive gescheitert" -d "Script abgebrochen" -m "$(date '+%Y-%m-%d %H:%M') Information: Es wurde heute bereits ein Offline Snapshot erstellt"
    fi
fi

4. The second script runs daily and snapshots the VMs as "VMs_backup_online_" with the date, no matter if they are running or not. Keep in mind: if you have to restore snapshots of VMs which were running at the time the snapshot was taken, they will be in a "crashed" state. I haven't had any issues with that so far, but there might be situations with databases running in a VM which could break because of this. That's why I take the weekly snapshots with all my VMs turned off, just in case.
#!/bin/bash
#description=
#arrayStarted=true
#backgroundOnly=false
cd /mnt/cache/VMs_backup
sd=$(echo VMs_backup_onl* | awk '{print $1}')
ps=$(echo VMs_backup_onl* | awk '{print $2}')
if [ "$ps" == "VMs_backup_online_$(date '+%Y%m%d')" ]
then
    echo "There's already a snapshot from today"
else
    btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_online_$(date '+%Y%m%d')
    sync
    btrfs send -p /mnt/cache/VMs_backup/$ps /mnt/cache/VMs_backup_online_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
    if [[ $? -eq 0 ]]; then
        /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS Send/Receive beendet" -d "Script ausgeführt" -m "$(date '+%Y-%m-%d %H:%M') Information: BTRFS VM Online Snapshot auf HDD erfolgreich abgeschlossen"
        btrfs sub del /mnt/cache/$sd
        #btrfs sub del /mnt/disks/backup/$sd
    else
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS Send/Receive gescheitert" -d "Script abgebrochen" -m "$(date '+%Y-%m-%d %H:%M') Information: Es wurde heute bereits ein Online Snapshot erstellt"
    fi
fi

I don't have it automated in a way that old snapshots get deleted automatically; I monitor the target drive, and if it's getting full I delete some old ones. The first command lists all the snapshots and the second deletes a specific one. Don't delete the initial read-only snapshot if you have differential snaps building on top of it.

btrfs sub list /mnt/disks/VMs_backup_hdd
btrfs sub del /mnt/disks/VMs_backup_hdd/VMs_Offline_20181125

If you have to restore a vdisk, simply go into the specific folder and copy the vdisk of the specific VM back to its original share on the cache. The XML and NVRAM files for the VMs aren't backed up by this, only the vdisks. To back up those files you can use the app "Backup/Restore Appdata" to back up the libvirt.img, for example.

EDIT: Forgot to mention, I use a single 1TB NVME cache device formatted with BTRFS and a single old spinning-rust 1.5TB HDD as an unassigned device as the target for the snapshots. Nothing special, no BTRFS RAID involved.
  14. @Jaster Have you tried using BTRFS snapshots to back up your VMs? You need the source and target to both be on BTRFS. I have my cache drive set up as BTRFS and an unassigned drive, also formatted as BTRFS, as the target for my backups. The initial backup transfers the first snapshot and takes some time; every new snapshot only transfers the differential (changed or new) data and is much quicker than a fresh copy of the VMs. Have a look into the following:
  15. You are missing nothing. If you add VNC as a GPU in your VM config it will always be the first GPU, and passing a physical GPU through alongside it causes issues in most cases. Most users use VNC for the initial setup of the VM only. Either use your monitor and pass through mouse and keyboard to control the VM directly on your Unraid box, or use other remote software like RDP or AnyDesk to control it from another PC on your network.
  16. Same for me. I already had 2048 set in my old config, and as shown in your screenshot it's the default value. Does the new SWAG template have a different value? The icon also didn't change to the new one for me; not a big deal.
  17. Make sure you optimize your VM, especially on first gen TR4. Try not to use cores from different dies, for example (see the sketch below); you can find a couple of tips in the forums on how to improve gaming performance. Whether that helps or not, though, the game in its current state isn't really well optimized. I saw some streams with really beefy hardware, and it runs in the low-20 fps range just like in your case.
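Pinning cores from one die would look something like this in the VM's XML. The core numbers here are just an assumption for a 1950X (each core paired with its SMT sibling); check lscpu -e or the Unraid CPU pinning page for your actual core/thread pairs:

<cputune>
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='24'/>
  <vcpupin vcpu='2' cpuset='9'/>
  <vcpupin vcpu='3' cpuset='25'/>
</cputune>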
  18. You can only see this option if you have a virtual GPU (VNC) set. Plug a monitor into your 6450 and see if you get any output.
  19. Today I got an info from "Fix Common Problems" that the container "letsencrypt" is deprecated. So far so good; I had already read a couple of weeks ago that you guys had to switch the name of the container, but I never changed my setup until today. What I did so far:

1. stop the letsencrypt container
2. back up the config folder in appdata (copied to a new folder called swag)
3. edit the old "letsencrypt" container
4. change the name to swag
5. switch to the "linuxserver/swag" repo
6. adjust the config path to the new folder
7. start the swag container
8. adjust "trusted_proxies" in the Nextcloud config.php in /appdata/nextcloud/www/nextcloud/config to swag

Did I miss something?
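For step 8, the entry in config.php ends up looking roughly like this (the exact array syntax may differ depending on your Nextcloud version):

'trusted_proxies' => ['swag'],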
  20. @Cilusse For me, VNC Viewer only opens a new tab with a "Failed to connect to server" error each time I try to access a VM. The VM and the Docker logs only show up on the 3rd try for me, not the 2nd like other people are reporting.
  21. You'd better not pay more than 400 bucks for a used 2080 Ti. The 3070 is around this price and should have the same performance. I don't think people will sell their 2080 Tis for such a low price, though. Always remember what these cards cost new 😁
  22. @mathieuhantz The usual way is to set up the VM without GPU passthrough via VNC, install the virtio drivers for network, and make sure everything works. Enabling RDP or setting up remote software like TeamViewer or AnyDesk is also a good idea, because as soon as you use a GPU for passthrough, VNC should be disabled; otherwise VNC will always be the primary GPU and the passed-through one won't work. The "PCIe ACS Override" option has nothing to do with passing through USB devices like the 3 shown in your screenshot. It is needed for splitting up IOMMU groups for PCIe passthrough, in case you have a USB controller itself grouped together with other devices and want to use the whole controller inside the VM; same for GPUs or network controllers.
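Under the hood, enabling that option just adds a kernel parameter to the append line in syslinux.cfg, roughly:

append pcie_acs_override=downstream,multifunction initrd=/bzroot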
  23. @gareth_iowc ls only shows the space actually used by qcow2 files, even if the virtual disk is defined larger. Try the following to get more info about the vdisk and its real size:

qemu-img info hassos_ova-4.12.qcow2
  24. This is a joke, isn't it? Did you really do that? You'd better not!
  25. @duffbeer Trust me, the "EPYC tweak" is a custom edit done by the user, not a setting made by Unraid. Usually only the following is needed, and it is reported in a couple of tutorials here in the forums. In Unraid you can only set it to emulate a QEMU64 CPU, not an EPYC or Skylake or whatever.

<cpu mode='custom' match='exact' check='full'>
  <model fallback='forbid'>EPYC</model>
  <topology sockets='1' cores='7' threads='2'/>
  <cache level='3' mode='emulate'/>
  <feature policy='require' name='topoext'/>
  <feature policy='disable' name='monitor'/>
  <feature policy='require' name='hypervisor'/>
  <feature policy='disable' name='svm'/>
  <feature policy='disable' name='x2apic'/>
</cpu>

Forcing the VM to use specific CPU features can end up emulating features which aren't present in the physical CPU. In general these tweaks are needed for first and 2nd gen Ryzen chips and Threadrippers; maybe try to remove them and test without. The only thing Unraid sets by default is the following.

<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='3' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
</cpu>

Try it with this snippet and adjust the core/thread counts so it matches your needs.