buetzel

  1. Uhm... a dumb question: is there a compilation of these "usual tweaks" somewhere, or does one have to hunt them down all across the forum? I'd be grateful for the relevant links...
  2. Thank you SO MUCH. I was having the worst time since setting up a Win11 VM four days ago - I experienced crashes in all browsers (Chrome, Edge, Firefox), especially on websites with games that use WebGL/Unity or video, but also on perfectly normal websites. The performance was just bad, too. I looked for the problem mostly in the GPU department, but what finally seems to be the solution for me are those two changes to the <cpu> and <cache> lines (a minimal sketch of what such a block can look like follows after this post list). I didn't enable nested virtualization because I don't need it, and SVM and GPU passthrough were already set up. So if someone else experiences browser crashes in a Win11 VM or general slowness - try this one.
CPU usage was also really high before - most of the time between 30% and 60%. Now it's below 10%, even with a browser game open (kinda idling, though).
For the record: this is on an MSI MEG X570 Unify (AM4 platform) with a Ryzen 9 3900X (undervolted to 1.000V for power efficiency), a GeForce GTX 1050 Ti and 32GB DDR4/3600/CL16 RAM, running Unraid 6.11.5. Difference like night and day. Thanks again.
  3. Okay, now you've completely lost me. I can only suspect that you still didn't get my point, so I'm going to try again. I'm getting really frustrated right now, so excuse me if this gets too direct.
You wrote in your PS about the H310 controller not having a UEFI ROM. That is totally irrelevant to me, because the mainboard it is physically slotted into has only a BIOS, NOT UEFI, owing to the motherboard's age. So there is no legacy mode in the mainboard's firmware that I could activate - there is only legacy mode. NO UEFI AT ALL. So I literally CANNOT boot in UEFI mode, which could then become problematic for the controller.
That controller is what connects the drives in my array to my mainboard. It is NOT being passed through to the VM. It's what makes my Unraid server my Unraid server. As far as I researched, I still needed that tapemod to get said controller to work with my mainboard at all. It didn't work without the tapemod, it works with the tapemod. It's what made me able to use that controller in the first place. It may be that disabling a "boot ROM option" in the BIOS (! NOT UEFI !) could be another solution for the problem I had, but I don't care, since it works fine with the tapemod.
So: what the fuck has any of this to do with a VM being better on OVMF?
Also: it would be REALLY helpful for the readability of your posts if you would start a new paragraph when changing the topic.
I will have a look at Virt-Manager, though. It would be nice if it were a solution for my "VM losing the GPU after some changes to the configuration" problem.
  4. This was actually the solution to my problem: recreating the VM, setting everything up as it already was, setting UUID and MAC to what they were before, and adding the multifunction part to GPU and audio (a sketch of that multifunction hostdev pair follows after this post list). The audio somehow doesn't work, though - that might be a problem of the monitor with speakers I use; I'll try to get that working some other time. The VBIOS was in fact not needed. Or maybe it is for the audio... who knows.
What I learned from this: backups of VM configuration XMLs only get you so far. UUID, MAC and that multifunction part are the important things to "back up".
Yeah... there's still no UEFI in my system, so I can't switch anything to legacy. The system IS legacy. I might have a look at this again after a future upgrade of the server, but for now it is running, and I don't change running systems without a reason.
After getting this to work yesterday, I wanted to add two more cores to the VM because it was a bit slow for my taste (the transition from a Ryzen 9 3900X bare metal to 4 cores of that L3426 in a VM, for power saving reasons, IS a big step down). While it was mostly just switching the configuration back to Form View, adding two cores, saving, editing again in XML mode and redoing the multifunction part, an error somehow got back in there: the VM threw an error on start about PCI.1 missing or something, and the GPU passthrough was borked again. I had to redo the VM from the newest XML backup. I found that very strange and very annoying. But I'm currently writing from inside that VM, so I got it to work again. Whatever.
You helped me very much, ilarion, and I thank you for that!
  5. Did you want to add something to that quote?
  6. Hm. Well... I did it over the Terminal in the Web GUI, so a simple rm Linux command... it didn't complain, and they were gone. The free space wasn't reclaimed, though; I had to restart the array for that. As I said, I learned that mover doesn't move anything anyway when switching from cache: prefer to cache: no. So even if I had disabled VMs completely, it probably wouldn't have done that. But I can't be sure in that case. And yeah, I learned that the VMs are gone - by seeing them gone.
I don't want to create a new VM - the VM per se is okay. Can't switch from SeaBIOS to OVMF though, since that doesn't work after the VM is created. DDU is an idea, though... gonna go and try that. Also: no second graphics card. And: is OVMF a good idea on an old system like that? Remember: that Xeon is equivalent to a first-generation Core i7. There is no UEFI support on that... so things from the past might still be relevant for this hardware, since it's also a thing from the past. Of course, the Unraid version is currently still 6.10.3; it started as 6.9.2, I think.
Edit: also, that VM is an activated Windows license, so I kinda want to keep that without hassle. And hassle is what you get when you have to reactivate it as "changed" hardware more than once (which I already used). So... no recreating as a new VM with a new vdisk and a different configuration.
  7. This is a bit of a rant, so please bear with me. I do have some valid questions at the end of this, but I really want to offer as much of the surrounding information as I can, so as not to waste anyone's time with a wild goose chase that I already went on...
Hardware
For the hardware changes, to make sense of all the rambling: the server was first on an ASUS P7P55D with a Xeon L3426, with working GPU passthrough. Then I switched to an ASUS P6T SE with a Xeon L5460 for more cores and didn't get the passthrough to work again on that one. Then I switched to an ASUS P7P55D-E with an i7 870 for a slightly higher single-thread rating (according to PassMark ca. 1300 vs. ca. 1100 on the Xeons), again not getting the GPU passthrough to work, and yesterday I switched back to the L3426 combination, mainly for power saving reasons (45W TDP vs. 80W TDP on the i7 870).
Additional hardware: 16GB DDR3 UDIMM RAM (4x 4GB) for the 115x boards, 12GB DDR3 UDIMM RAM (6x 2GB) for the 1366 boards, that damned ASUS GeForce GT 1030 (passive, 2GB GDDR5), and a Dell Perc H310 controller flashed to IT (non-RAID) mode with that electrical tape mod on PCIe pins 4+5 (iirc) for 7 of the 8 HDDs... I should probably move the last one there, too. The 8th HDD, 2 SATA SSDs and the optical drive are connected to the onboard SATA of the mainboard. Of course there are also a case and a PSU, but those don't seem very relevant. A DVB-S PCI card is also somewhere in there, but it's not in use currently, as I haven't connected it to the sat dish yet.
What happened?
So... after months (okay, with also months of breaks in between), I finally managed to get my Windows 10 VM with GPU passthrough to work again yesterday, after rebuilding my server back to that L3426. I then thought, "hey, why not try a few Linux distros, too?" As I wanted to put those onto my cache pool as well, for speed reasons, I had to clean it up first. The cache pool is only 120GB in size (2x 120GB SATA SSD, btrfs encrypted mirror, btw), so I needed all the space I could get. What seemed to be using the most space were the docker.img and libvirt.img in the system share. So I made the stupid mistake of thinking "hey, those are also on drive 6 - just disable cache use, start mover and they'll be gone". Which of course they weren't, since mover doesn't remove jack shit when changing cache use from "prefer" to "no". As I then learnt, those two versions of the files aren't even synced either, which became pretty clear when, after manually deleting the offending files from the cache_pool, my 2 VMs (I had just started with a Linux VM when it came to a screeching halt because of "cache pool full") and 6 or so Docker apps were gone.
My main problem at this point is: I can't for the life of me get that stupid VM to see the GPU again. It's just gone. It is not found in the device manager of that Win10 installation unless "hidden devices" are shown, but that, of course, doesn't help.
What did I try?
I did, however, have a slightly older version of the VM configuration XML (from yesterday, but from before getting the GPU passthrough to work), so I used that as a starting point to recreate that Win10 VM with the still existing vdisk.img. I'm pretty sure at this point that the mistake can only be in the VM configuration XML file, since I didn't change anything apart from deleting the libvirt.img. Since I got it to work yesterday, the hardware and the BIOS settings of the hardware can't be the problem. Neither can the settings of Unraid in general, since I didn't change anything after getting it to work yesterday.
PCIe ACS override is set to disabled, VFIO allow unsafe interrupts is set to yes - as I said, with these settings it was working yesterday. I bound IOMMU groups 7 (one of the onboard USB controllers, for mouse/keyboard use on the front of the case in the VM) and 17 (the GT 1030 GPU and its audio component in one group) to VFIO. This is still the state it was in when it was working yesterday.
I have attached the current VM configuration XML - I only deleted the UUID and the bridge network MAC address from it, otherwise it is the current state. The VM starts and stops as it should and is accessible via RDP and AnyDesk (though on AnyDesk it is nearly unusable, because with no GPU found in the system it runs at 640x480, which just isn't workable in Win10 anymore). Also, the Bluetooth dongle I put into that passed-through USB port seems to work just fine for mouse and keyboard, which I was able to confirm via AnyDesk.
Does anyone have any idea what could be wrong with that XML? I'm relatively sure that putting the GPU and its audio device on Bus 5, Slot 0, and Function 0x0 and 0x1 respectively is what I did yesterday before getting it to work. I'm equally relatively sure that the edited vbios ROM (dumped on a Windows bare-metal machine from the actual graphics card, then with the first bit of the file deleted with a hex editor as per SpaceInvader's video guide) is the one that worked yesterday. I have also tried it with the non-edited version of that same vbios dump, and without passing the vbios at all. I also tried setting the multifunction GPU/audio device on Bus 4, and also without editing the XML at all, which results in putting the GPU on Bus 4 and the audio device on Bus 5, both as Function 0x0.
Since the installed driver inside the Windows VM reports "PCI Bus 5, Slot 0, Function 0" for the GPU in the driver details under "Location Information" - the GPU gets shown when "show hidden devices" is enabled in the device manager - I think Bus 5 should be the correct one. Configuration as a multifunction device, I'm pretty sure, was the correct one yesterday. I can't confirm that for the audio part, since its driver details only give something like "HD Audio Device" as location information - also hidden, of course.
When putting both on Bus 6 as a multifunction device, the VM doesn't start anymore but seems to hang in the starting process, causing the first of the 4 pinned and isolated CPU cores for that VM to stay at 100% with the other 3 at 0%. If the VM boots in another configuration (Bus 5 for GPU and audio, for example), this only happens for a few seconds, after which all 4 assigned CPU cores get used and the system boots. Now, the weird thing is, I kind of remember putting it on Bus 6 yesterday, but I can't remember whether that was when I got it to work or not. It really could have been Bus 5, which the state of the driver in the VM suggests.
Also: yesterday the GPU passthrough only really started working after I got the Nvidia driver via Windows Update while in the VM via AnyDesk or RDP - sadly, I'm not sure which. But as I said before - the device is still "installed" driver-wise, just hidden in the device manager as if it weren't physically there. The only thing I could still think of would be removing that driver completely from Windows and hoping for the GPU to show up again.
Also: since I'm obviously not nearly experienced enough with libvirt - is there something else that might have been lost due to the deletion of that libvirt.img?
What I'm trying to ask is: is there something apart from the VM configuration XML that might be messed up now, that I don't know about?
I also attached the log of how the VM starts and stops (via a shutdown command inside the VM) when used with the attached VM configuration XML. To a layman like me it seems okay; the notable part is probably this one for the GPU, its audio and - I guess - pci.8, which is probably that USB port:
-device vfio-pci,host=0000:06:00.0,id=hostdev0,bus=pci.5,multifunction=on,addr=0x0,romfile=/mnt/user/isos/vbios/GT1030_GP108_edited.rom \
-device vfio-pci,host=0000:06:00.1,id=hostdev1,bus=pci.5,addr=0x0.0x1 \
-device vfio-pci,host=0000:00:1a.0,id=hostdev2,bus=pci.8,addr=0x1 \
When starting the VM, the Unraid log shows the following, if that is any help:
Aug 16 14:17:40 yggdrasil kernel: br0: port 2(vnet9) entered blocking state
Aug 16 14:17:40 yggdrasil kernel: br0: port 2(vnet9) entered disabled state
Aug 16 14:17:40 yggdrasil kernel: device vnet9 entered promiscuous mode
Aug 16 14:17:40 yggdrasil kernel: br0: port 2(vnet9) entered blocking state
Aug 16 14:17:40 yggdrasil kernel: br0: port 2(vnet9) entered forwarding state
Aug 16 14:17:41 yggdrasil kernel: vfio-pci 0000:06:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Aug 16 14:17:41 yggdrasil kernel: vfio-pci 0000:06:00.0: No more image in the PCI ROM
Aug 16 14:17:42 yggdrasil avahi-daemon[9926]: Joining mDNS multicast group on interface vnet9.IPv6 with address REDACTED.
Aug 16 14:17:42 yggdrasil avahi-daemon[9926]: New relevant interface vnet9.IPv6 for mDNS.
Aug 16 14:17:42 yggdrasil avahi-daemon[9926]: Registering new address record for REDACTED on vnet9.*.
Aug 16 14:17:42 yggdrasil kernel: vfio-pci 0000:00:1a.0: vfio_cap_init: hiding cap 0xa@0x58
I'd really appreciate some input here, especially about anything I possibly overlooked regarding the loss of the libvirt.img file.
Best regards,
buetzel
Attachments: current_vm.xml, vm.log
  8. Thanks for the deep link to the relevant part of the docs. If I weren't a programmer myself, I would probably complain that the link at the bottom of the GUI isn't adjusted to the part of the GUI that's currently shown... but hey... I would hate to code that, too.
  9. So "preserve assignments" basically just preassigns the drives to the slots they were in before for the new array? That of course is a very useful functionality. Thanks for the warning about not putting data drives in the parity slot. Always good to know. Would have put the same drive as parity "by accident" - I think it's the fastest one and of a different manufacturer, so easy to find out which one it is - but it sure is better to do this on purpose and to know why.
  10. Thank you for your reply, @itimpi. Am I correct in assuming that your first option would be done with the default "Preserve current assignments: none" option in the New Config tool? I specifically do not want the array slots preserved - two drives are unrecoverable, and clearing their slots is the whole idea.
  11. Before you continue reading: please don't spend time on this topic if you don't have time to waste. This is purely academic for me, no important data was lost, so please carry on with whatever you were doing if you have anything even remotely important on your plate.
So yeah, I'm involuntarily taking the trial part of my trial license very seriously. Apparently I get to try out what happens when one is stupid enough to use the wrong modular PSU cable for powering a few hard disks, fries 3 of them permanently, and doesn't have enough parity drives in the array to compensate for that stupidity. Well. On the plus side, it was only 2x 250GB and 1x 320GB drives, the smallest I still had in use, and there basically wasn't anything on them - nothing I couldn't afford to lose, at least - but now I'm getting to learn what to do in this - not "real", but "closely simulated" - worst case.
So I had a testing array with 4x 500GB, 1x 250GB and 1x 320GB disks, one of the 500GB ones being parity. The two smallest drives were among the three disks I literally burned to hell. Now... the situation presents itself as:
- a rather useless 500GB parity drive, which isn't enough to compensate for two drives that have gone the way of the dodo, so I probably have no use for it anymore - or rather, for the data on it
- three 500GB drives that should have their data intact - at least if I read that correctly on the forums time and time again - and that means basically everything that was on the array in the first place
- the two crispy, kinda smelly ones that only left a red X in their wake and are lying on my desk in front of me, mocking me for my lack of coffee and my inability to inspect cables closely enough this morning
The array WAS set up to go "fill", so all the data should actually still be there, as the toasties were the last two drives in the array. Just... all the drives were formatted btrfs-encrypted, and I'm kinda clueless as to how to access them now. Kinda. Well. I'm not totally clueless regarding Linux, so I think I could probably mount those drives in the Unraid instance itself - with the correct parameters for btrfs and encryption, of course, which I'll have to google - and then probably share them via Samba to get the data from another PC. After that, I think I'd have to go via Tools -> New Config to build a new array, then move the data back. Is this about right? Or is there a way to remove those two drives - which are definitely, without question, never in a million years coming back - from the array, rebuild the parity disk and keep the array and the data on it without all the moving around?
Again: please don't spend time on this if you don't have any to spare. I could literally just kill the array, restart via Tools -> New Config and be done with it, no harm done.
By the way: is there any central write-up on what to do in such a case, where the array is most definitely dead? For example... in the docs? I couldn't find one there. Which I find a bit strange, I must say, since I thought Unraid was kinda big on redundancy and such... surely remedying a catastrophic failure should be as well documented as possible? Or am I expecting too much here, and this gets handled via the forums as such problems arise? Maybe because those failures are few and far between? (I kinda doubt that.) Or rather: they are very individual, so one write-up wouldn't help much in general? (I could see the point in that.)
Best regards and happy Easter to those who care about that (I don't, though, hence I'm playing with the Unraids - yeah, it's already two now).
buetzel
  12. Thanks for welcoming me, @SpencerJ. It's good to know the German part of the forums is doing so well, but honestly I'm so used to using English for anything IT that I wouldn't even know what words to search for in German. Also, me even noting that English is my second language might have been a mild case of impostor syndrome. Apart from actually speaking English (real talking, not typing), I'm pretty much bilingual by now.
  13. So, basically, I understood correctly and will be able to use a Basic license for this machine. I guess having to switch on an external drive after the machine has already booted and started its array is not a problem for me, as I would be doing my weekly backup before the monthly one anyway, so that actually fits into the workflow just fine. Thanks for clarifying that.
As for the multi-language section - thanks for the notice, but no thanks. Firstly, I think a licensing question is best discussed in the language of the company doing the licensing, and secondly, I've found that information in basically any area of IT is easier to find in English than in my native German. I'm usually pretty fine with English anyway, and if I feel I might not be understanding something fully, it doesn't hurt to ask.
  14. Hi there. I'm currently in the process of evaluating Unraid for my homeserver/NAS/backup needs. Licensing for my homeserver/NAS will be easy - there are already 13 storage devices in that machine, so Pro it is.
However, I have a smaller, less powerful machine with 6 HDDs (and only 6 SATA ports, for that matter) that I plan to use as a first-tier backup machine for weekly backups. This won't be always on, only when doing a backup. Second-tier backup will be via external USB HDDs attached directly to this machine, to back itself up monthly.
And thus the question: do I need a "Plus" license for that smaller machine, because with the external drive for second-tier backups there will be more than 6 storage devices, or is a Basic license enough, as long as the external drive isn't connected when booting the server/starting the array?
If I understood the forum posts I found about this correctly, a Basic license might be enough as long as I don't try to boot with the external HDD connected, but I could not find a satisfyingly current answer to this question, so I decided to post it (again?). Also, I find the info on https://unraid.net/pricing regarding this point ("attached to the server before starting the array") not explicit enough. Maybe that's just me, English being my second language, but here I am, and sorry if this is obvious to everybody else.
Best regards
buetzel
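
For reference on post 2: that post mentions "those two changes to the <cpu> and <cache> lines" without quoting them, so the following is only a minimal sketch of what such a tuning block can look like in an Unraid/libvirt domain XML, assuming the tweak in question is the commonly recommended CPU host passthrough with cache passthrough. The topology values below are placeholders, not the poster's actual settings:

  <cpu mode='host-passthrough' check='none' migratable='on'>
    <!-- Placeholder topology: match cores/threads to the CPU cores actually pinned to the VM -->
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <!-- Pass the host's CPU cache topology to the guest instead of emulating it -->
    <cache mode='passthrough'/>
    <!-- Expose AMD topology extensions so Windows sees proper SMT pairs on Ryzen hosts -->
    <feature policy='require' name='topoext'/>
  </cpu>

Whether this exact combination matches what the poster applied is an assumption; the point is simply that both the CPU mode and the <cache> element live inside the <cpu> block of the domain XML.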
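
For reference on posts 4 and 7: the "multifunction part" is never quoted in full, so this is only a hedged sketch of how a GPU and its audio function are commonly kept together as one multifunction device across two libvirt <hostdev> entries. The host addresses 06:00.0/06:00.1, the guest bus 0x05 and the vbios ROM path are taken from the QEMU arguments quoted in post 7; everything else is illustrative:

  <!-- GPU: guest function 0x0, marked multifunction so the audio function can share the slot -->
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </source>
    <rom file='/mnt/user/isos/vbios/GT1030_GP108_edited.rom'/>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
  </hostdev>
  <!-- GPU audio: same guest bus and slot, function 0x1 -->
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
    </source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </hostdev>

Libvirt translates the first entry into roughly the "-device vfio-pci,...,bus=pci.5,multifunction=on,addr=0x0,romfile=..." line seen in the quoted log. If the two entries do not share the same guest bus and slot, the audio function ends up on a separate device, which matches one of the configurations the poster describes as not working.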