Everything posted by meep

  1. You could try rebuilding the VM using the Q35 rather than the i440fx machine type. That's got me out of a few pickles in the past. Have you selected your target install disk by this point? It's been a while since I did an XP install, but was there an opportunity to load custom drivers before this point? You might need to load viostor and the other drivers from the virtio ISO. Beyond that, I'd be stumped.
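     A quick way to confirm which machine type an existing VM is currently using before rebuilding it (a rough sketch from the unRAID console; 'WindowsXP' is just a placeholder for the real VM name):
        # Show the machine type currently defined for this VM
        virsh dumpxml WindowsXP | grep -i "machine="
        # Switching between i440fx and Q35 is best done by recreating the VM
        # from the unRAID VM form rather than hand-editing this value.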
  2. Try reducing the number of CPU cores to just 1 for the setup process.
  3. Hi @Frag-O-Byte Sorry to hear you're having trouble. I know how frustrating it can be when things don't work as expected. However, do persevere; once it's all working, it's amazing. I'm running 3x VMs permanently: two MacOS workstations with multi-device passthrough and one Win10 headless. They have been rock solid (apart from this past couple of weeks, when I added some more RAM that, it turns out, is definitely not liked by my system!). You're on the right track by attempting to get one VM up and running with stability first. Anyway, there are a few things you can try:
     1. Try installing the GPU drivers not in Splashtop mode but booted to the GPU (if you can). Many GPUs will output a low-resolution display without drivers, so it might be better to do the install with a display attached.
     2. Try re-making your VM as i440fx rather than Q35. I have had issues in the past where, in particular, AMD drivers would not even install in one config but were fine in the other. (You need to completely remake your VM to make this change.)
     3. If you haven't already, try using SeaBIOS as the BIOS. I've found this more stable/reliable.
     4. Likely not an issue as you're booting Windows, but MacOS VMs often get shirty if the number of CPU cores passed through doesn't match a real Mac. So they work well with 4 or 8, but not 5, 6 or 7. Also, Windows VMs used to have an issue applying Windows updates when more than a single CPU core was defined. It might be worth your while reducing or adjusting the CPU config, just for testing.
     5. In your XML, you should update this:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        to this:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
        </hostdev>
        This is your GPU audio device, and the change is subtle. I've adjusted the bus from '6' to '5' to match the HDMI config, and bumped the function from '0' to '1'. This is a more realistic configuration and, given your issue appears to be with your GPU, might have an impact. You need to make this change in XML mode, and repeat it if you happen to change/re-save your GPU from the form editor. Good luck!
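     As a sanity check on the addresses above, it's worth confirming that 42:00.1 really is the GPU's audio function (a rough sketch; the 42:00 address is taken from the XML in this post, so adjust if yours differs):
        # List the video and audio functions of the GPU on host bus 42,
        # along with the kernel driver currently bound to each
        lspci -nnk -s 42:00
        # Expect a VGA controller at 42:00.0 and an Audio device at 42:00.1,
        # ideally both bound to vfio-pci when stubbed for passthrough.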
  4. I would tend to agree with you. Next, I'd check that the motherboard BIOS is up to date. Strip out as much hardware as you can (RAM, GPUs, add-in cards etc.), try to get the system as bare-bones as possible, and start changing BIOS settings to see if you can make progress.
  5. It's not clear if this is a licensed copy of unRAID or not, but my next step would be to download the demo version to a different/new USB thumb drive. This will rule out the USB device as the problem. The next step in the boot process is to unpack the initramfs, so it seems it's getting stuck at that point.
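     If a fresh stick boots fine, it can also be worth checking the boot files on the original one for corruption. A rough sketch, assuming the suspect stick is mounted at /mnt/usb on any Linux box and a matching unRAID release zip has been extracted to /tmp/unraid:
        # Compare the kernel and initramfs on the suspect flash drive
        # against a freshly downloaded copy of the same release
        md5sum /mnt/usb/bzimage /mnt/usb/bzroot
        md5sum /tmp/unraid/bzimage /tmp/unraid/bzroot
        # Mismatched checksums point to corrupt files or a failing flash drive.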
  6. Also, try reducing the memory speed and removing all but 1 or 2 sticks. I added some memory to my TR 2950X system recently and had constant fall-overs. Removing the new DIMMs solved the problem immediately.
  7. Is it a fresh installation of unRAID on the USB? If not, is it possible that there are some VFIO settings or device detachments left over from a previous install? You could be inadvertently detaching the USB controller that the boot thumb drive is plugged into.
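     A quick way to check for leftovers is to look at the places where these bindings are usually kept (a rough sketch; /boot/config/vfio-pci.cfg is the file used by the VFIO-PCI Config plugin and may not exist on a clean install):
        # Look for PCI IDs or addresses being stubbed at boot
        grep -i "vfio" /boot/syslinux/syslinux.cfg
        cat /boot/config/vfio-pci.cfg 2>/dev/null
        # Any USB controller listed here is hidden from unRAID itself,
        # including the one the boot thumb drive might be plugged into.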
  8. Interesting project! You will set up your 4TB drives in the unRAID array: one as parity, the other three as data drives. That gives you 12TB of shareable storage and parity protection against one drive failure. Parity protection != backup; if the data is mission critical, you should also have a backup to a separate system, offsite etc. Once set up, you can go about creating shares on your array. You have a lot of flexibility here. You could have one share per VM and have each Windows instance mount its own, or you could have one or more shares by function (queue, in progress, completed, output) and have each VM mount each of these. You have a lot of options. You also have a fairly steep learning curve ahead to get this set up correctly and, critically for a production environment, in a stable fashion. What motherboard do you have, as a matter of interest?
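     For illustration, the 'by function' layout might look like this on the unRAID side (a rough sketch; the share names are just examples, creating a top-level folder under /mnt/user effectively creates a user share with default settings, and you'd still fine-tune each share in the UI):
        # Example 'by function' share layout on the array
        mkdir -p /mnt/user/queue /mnt/user/in_progress /mnt/user/completed /mnt/user/output
        ls /mnt/user/
        # Each Windows VM would then mount these shares over SMB
        # (e.g. as mapped network drives) rather than writing to local disks.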
  9. So the new memory was the problem for me. I had 64GB in 4 DIMMs. I'd added another 4x 8GB of otherwise identical chips, and the fun started! Since I removed them, all has been well with the world again. Now, I just need to figure out if it's all the new DIMMs, just one of them, or maybe one or more slots. I have days if not weeks of testing ahead, but at least I feel I'm on the right track.
  10. Dabbling in bifurcation, I found an excellent system from MaxCloudOn.com that helps unlock the full potential of my unRAID system (TR 2950X based). I've written up my latest setup here; https://mediaserver8.blogspot.com/2020/04/mediaserver-83-bifurcation-edition.html The basic architecture diagram and the obligatory gratuitous hardware image are attached. Shortcut to the IOMMU breakdown; https://drive.google.com/file/d/1r2oTe5mm9T0Kc7HIDAwGGIpXHs1drHza/view?usp=sharing
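     For reference, the IOMMU breakdown linked above can be generated on any unRAID box with a short loop like this (a standard sysfs walk, nothing specific to this build):
        # Print every IOMMU group and the PCI devices it contains
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=$(basename "$(dirname "$(dirname "$d")")")
            echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
        done | sort -V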
  11. No help, but I'm in a similar boat with a similar system (ASRock Taichi X399, TR 2950X). The system was pretty solid for months, but I made a few changes over the past couple of weeks and it's been very unstable since. Like you, it will run fine for a few hours, but then it will lock up overnight and will take a few attempts to boot reliably. I ran a memtest today, and it froze at 2 hrs, so that's likely a clue. I've removed the extra 32GB I added and will run again overnight to see if that's the culprit. Not a solution for you, I know, but just wanted to say I empathise!
  12. Hi @acosmichippo You don't need to 'passthrough' unRAID shares via XML like that. You simply mount them from within the OS, like you would any network volume. Either find the share on your network and double-click it; it will appear on your desktop (if you have Finder prefs set to display it*) or, at the very least, will be available in Finder. If your server is not showing up under Network for some reason, click Go -> Connect to Server in the Finder menu and enter your server and share details. Once you have accessed one or more volumes, you can drag them into Login Items for your account in Users & Groups. This will cause those volumes to mount every time you reboot your VM. *Finder preferences can be set so that every mounted volume appears on your desktop. If you have trouble, search online for 'MacOS network volume mounting' or some such, not unRAID, as what you're doing is the same as connecting a Mac on a LAN to a separate file server. Good luck!
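     The same mount can also be made from Terminal inside the VM, which is handy if you prefer scripting it (a rough sketch; 'tower', 'share' and 'username' are placeholders for your own server details):
        # Mount an unRAID SMB share from macOS Terminal
        mkdir -p ~/unraid-share
        mount_smbfs //username@tower/share ~/unraid-share
        # Unmount when finished
        umount ~/unraid-share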
  13. I have a Clover boot problem that's just started somewhat out of the blue. For the past few days, when I boot my MacinaboxCatalina VM, I just get a blank screen with a solid cursor (it's been working perfectly for several months). The system hangs there and no Clover menu loads. I have 3x disks defined: my Clover qcow, a system disk and a scratch disk;
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' cache='writeback'/>
        <source file='/mnt/user/domains/MacinaboxCatalina/Clover.qcow2' index='3'/>
        <backingStore/>
        <target dev='hdc' bus='sata'/>
        <boot order='1'/>
        <alias name='sata0-0-2'/>
        <address type='drive' controller='0' bus='0' target='0' unit='2'/>
      </disk>
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source dev='/dev/disk/by-id/ata-Crucial_CT120M500SSD1_14340D03A1FD' index='2'/>
        <backingStore/>
        <target dev='hdd' bus='sata'/>
        <alias name='sata0-0-3'/>
        <address type='drive' controller='0' bus='0' target='0' unit='3'/>
      </disk>
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source dev='/dev/disk/by-id/ata-TS64GSSD340_20140731B61710144112' index='1'/>
        <backingStore/>
        <target dev='hde' bus='sata'/>
        <alias name='sata0-0-4'/>
        <address type='drive' controller='0' bus='0' target='0' unit='4'/>
      </disk>
      If I remove both the system and scratch disks (the second 2x entries), the VM will load up the Clover boot menu, but obviously cannot go any further due to there being no volumes available. Adding back either of the disks reverts to the black screen and no Clover menu. I've maxed out my Google-Fu and found that this screen can appear if the Clover timeout is set to zero. The solution in that case is to hold down any key at boot to force the Clover menu to load. This kind of works for me: I can hold down, say, the space bar and after 3 or 4 attempts it will load up Clover and I can progress. It seems somewhat sensitive to when the key is first pressed, though; holding from VM launch in the unRAID UI won't cut the mustard, I've got to find some indeterminate time in the first 5/6 seconds. Now, I checked my Clover config, and my timeout is set to 5 seconds. So while the problem manifests and the solution (mostly) works as per my research, the cause is not as described. I've tried replacing the Clover qcow image file with one from a different VM, thinking there might be corruption, and had the same problem (the Clover qcow works fine on the other VM). I'm a bit stumped by this. Not a show stopper, but a major PITA. Any insights or suggestions appreciated. Thanks for reading. Please forgive the horrid state of my monitor - I think my wife was 'cleaning' at the weekend 😞
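     One quick corruption check on the Clover image while the VM is shut down (a rough sketch using the qemu-img tool that ships with unRAID's QEMU; the path is taken from the XML above):
        # Inspect and integrity-check the Clover qcow2 image (VM must be stopped)
        qemu-img info /mnt/user/domains/MacinaboxCatalina/Clover.qcow2
        qemu-img check /mnt/user/domains/MacinaboxCatalina/Clover.qcow2
        # 'No errors were found' rules out qcow2-level corruption, though it
        # says nothing about the Clover config.plist inside the image.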
  14. Can you swap your boot device into one of the USB 3.0 ports on 0e and pass through all the 09 controllers, or do you have the same issue there? Otherwise, it might have to be a discrete adapter pending a fix for the issue at hand.
  15. Hmm, you say you can't pass through 0e.00.0, and show an error that seems to indicate 0e.00.4. Are you saying you've tried 0e.00.3 and it produces the same error? And this is due to a known kernel issue on the X570 platform? Have you tried ACS override? Could you drop in a PCIe USB adapter and pass that through?
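     To make it clearer which functions actually live at that address and what each is bound to, something like this from the unRAID console can help (a rough sketch; 0e:00 is taken from the discussion above):
        # List every function on bus 0e, device 00, with the kernel driver in use
        lspci -nnk -s 0e:00
        # Show which IOMMU group each of those functions belongs to
        for f in /sys/bus/pci/devices/0000:0e:00.*/iommu_group; do
            echo "$f -> $(readlink "$f")"
        done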
  16. Why pass through 0e.00.0? 0e.00.3 is the USB controller?
  17. I've written up a somewhat more off-the-wall approach to getting discrete USB to 4x VMs using only a single motherboard slot; https://mediaserver8.blogspot.com/2020/04/unraid-discrete-usb-passthrough-to.html#more
  18. I do hope you are correct, but a few indicators suggest that the card you linked to is not the 'old' 4x discrete controller card: https://www.sonnettech.com/product/legacyproducts/allegroprousb3pcie.html
      The old one is a usb3-pro-4pm-e; the linked card is a usb3-4pm-e (no 'pro').
      The old one is 4x PCIe; this is 1x PCIe.
      The old one has 4x controller chips ranged vertically beside the rear-facing ports, not visible on the usb3-4pm-e.
      The old card retailed for well north of $100. At half that, the linked card would be a star bargain.
      I look forward to your findings though, and live in hope......
  19. ISSUE SUMMARY
      Long story* short, I've had occasion to change/update some of my system, resulting in needing a fresh parity sync. Every time I run this, after a few hours my system crashes (details below), requiring a restart, and rinse & repeat. I tried booting in safe mode and the system/sync ran perfectly (~9 hrs). I deduce (maybe incorrectly) that one or several of the plugins is causing my system to become unstable. But which one? My next step will be to try to identify the rogue plugin through 50% tests, whereby I enable half and see if I can get the system to crash, trying to whittle things down that way. However, if it's more than one culprit, that might not help, and it will take a long time. I post in the hope that someone more familiar with syslogs and diagnostics might be able to point me in the right direction.
      WHAT'S THE PROBLEM?
      Essentially, my system runs for a few hours under load (parity sync or basic rsync data copy tasks), but will then fall over with errors similar to this typical example;
      Apr 15 22:30:45 UnRaid kernel: docker0: port 1(vethb543e50) entered disabled state
      ### [PREVIOUS LINE REPEATED 1 TIMES] ###
      Apr 15 22:30:47 UnRaid kernel: device vethb543e50 left promiscuous mode
      Apr 15 22:30:47 UnRaid kernel: docker0: port 1(vethb543e50) entered disabled state
      Apr 15 22:35:02 UnRaid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
      Apr 15 22:35:44 UnRaid kernel: BUG: unable to handle kernel paging request at ffff88982491f000
      Apr 15 22:35:44 UnRaid kernel: PGD 2401067 P4D 2401067 PUD 1825578063 PMD 1825309063 PTE 800f00182491f163
      Apr 15 22:35:44 UnRaid kernel: Oops: 0009 [#1] SMP NOPTI
      Apr 15 22:35:44 UnRaid kernel: CPU: 1 PID: 44323 Comm: unraidd0 Tainted: G O 4.19.107-Unraid #1
      Apr 15 22:35:44 UnRaid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X399 Taichi, BIOS P3.90 12/04/2019
      Apr 15 22:35:44 UnRaid kernel: RIP: 0010:raid6_avx24_gen_syndrome+0xe2/0x19c
      Apr 15 22:35:44 UnRaid kernel: Code: c4 41 0d fc f6 c5 d5 db e8 c5 c5 db f8 c5 15 db e8 c5 05 db f8 c5 dd ef e5 c5 cd ef f7 c4 41 1d ef e5 c4 41 0d ef f7 48 8b 0a <c5> fd 6f 2c 01 c4 a1 7d 6f 3c 01 c4 21 7d 6f 2c 09 c4 21 7d 6f 3c
      Apr 15 22:35:44 UnRaid kernel: RSP: 0018:ffffc90010907d50 EFLAGS: 00010202
      Apr 15 22:35:44 UnRaid kernel: RAX: 0000000000000000 RBX: ffff888b83ad4c40 RCX: ffff88982491f000
      Apr 15 22:35:44 UnRaid kernel: RDX: ffff888b83ad4c60 RSI: 0000000000000000 RDI: 0000000000000004
      Apr 15 22:35:44 UnRaid kernel: RBP: 0000000000000004 R08: 0000000000000020 R09: 0000000000000040
      Apr 15 22:35:44 UnRaid kernel: R10: 0000000000000060 R11: ffff888b83ad4c60 R12: ffff8897e2e39000
      Apr 15 22:35:44 UnRaid kernel: R13: 0000000000001000 R14: ffff8897ec3a9000 R15: 0000000000000004
      Apr 15 22:35:44 UnRaid kernel: FS: 0000000000000000(0000) GS:ffff88982ce40000(0000) knlGS:0000000000000000
      Apr 15 22:35:44 UnRaid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Apr 15 22:35:44 UnRaid kernel: CR2: ffff88982491f000 CR3: 0000001824e78000 CR4: 00000000003406e0
      Apr 15 22:35:44 UnRaid kernel: Call Trace:
      Apr 15 22:35:44 UnRaid kernel: raid6_generate_pq+0x7d/0xb0 [md_mod]
      Apr 15 22:35:44 UnRaid kernel: unraidd+0xfae/0x136e [md_mod]
      Apr 15 22:35:44 UnRaid kernel: ? __schedule+0x4f7/0x548
      Apr 15 22:35:44 UnRaid kernel: ? md_thread+0xee/0x115 [md_mod]
      Apr 15 22:35:44 UnRaid kernel: md_thread+0xee/0x115 [md_mod]
      Apr 15 22:35:44 UnRaid kernel: ? wait_woken+0x6a/0x6a
      Apr 15 22:35:44 UnRaid kernel: ? md_open+0x2c/0x2c [md_mod]
      Apr 15 22:35:44 UnRaid kernel: kthread+0x10c/0x114
      Apr 15 22:35:44 UnRaid kernel: ? kthread_park+0x89/0x89
      Apr 15 22:35:44 UnRaid kernel: ret_from_fork+0x1f/0x40
      Apr 15 22:35:44 UnRaid kernel: Modules linked in: iptable_mangle xt_nat veth ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables md_mod xfs nfsd lockd grace sunrpc nct6775 hwmon_vid bonding igb(O) wmi_bmof mxm_wmi edac_mce_amd kvm_amd kvm mpt3sas crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc i2c_piix4 i2c_core aesni_intel aes_x86_64 crypto_simd cryptd raid_class scsi_transport_sas ahci k10temp ccp nvme glue_helper libahci nvme_core wmi button pcc_cpufreq acpi_cpufreq [last unloaded: md_mod]
      Apr 15 22:35:44 UnRaid kernel: CR2: ffff88982491f000
      Apr 15 22:35:44 UnRaid kernel: ---[ end trace 10a9007de2546ba5 ]---
      Apr 15 22:35:44 UnRaid kernel: RIP: 0010:raid6_avx24_gen_syndrome+0xe2/0x19c
      Apr 15 22:35:44 UnRaid kernel: Code: c4 41 0d fc f6 c5 d5 db e8 c5 c5 db f8 c5 15 db e8 c5 05 db f8 c5 dd ef e5 c5 cd ef f7 c4 41 1d ef e5 c4 41 0d ef f7 48 8b 0a <c5> fd 6f 2c 01 c4 a1 7d 6f 3c 01 c4 21 7d 6f 2c 09 c4 21 7d 6f 3c
      Apr 15 22:35:44 UnRaid kernel: RSP: 0018:ffffc90010907d50 EFLAGS: 00010202
      Apr 15 22:35:44 UnRaid kernel: RAX: 0000000000000000 RBX: ffff888b83ad4c40 RCX: ffff88982491f000
      Apr 15 22:35:44 UnRaid kernel: RDX: ffff888b83ad4c60 RSI: 0000000000000000 RDI: 0000000000000004
      Apr 15 22:35:44 UnRaid kernel: RBP: 0000000000000004 R08: 0000000000000020 R09: 0000000000000040
      Apr 15 22:35:44 UnRaid kernel: R10: 0000000000000060 R11: ffff888b83ad4c60 R12: ffff8897e2e39000
      Apr 15 22:35:44 UnRaid kernel: R13: 0000000000001000 R14: ffff8897ec3a9000 R15: 0000000000000004
      Apr 15 22:35:44 UnRaid kernel: FS: 0000000000000000(0000) GS:ffff88982ce40000(0000) knlGS:0000000000000000
      Apr 15 22:35:44 UnRaid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Apr 15 22:35:44 UnRaid kernel: CR2: ffff88982491f000 CR3: 0000001824e78000 CR4: 00000000003406e0
      Apr 15 22:35:44 UnRaid kernel: ------------[ cut here ]------------
      Apr 15 22:35:44 UnRaid kernel: WARNING: CPU: 1 PID: 44323 at kernel/exit.c:778 do_exit+0x64/0x922
      Apr 15 22:35:44 UnRaid kernel: Modules linked in: iptable_mangle xt_nat veth ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables md_mod xfs nfsd lockd grace sunrpc nct6775 hwmon_vid bonding igb(O) wmi_bmof mxm_wmi edac_mce_amd kvm_amd kvm mpt3sas crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc i2c_piix4 i2c_core aesni_intel aes_x86_64 crypto_simd cryptd raid_class scsi_transport_sas ahci k10temp ccp nvme glue_helper libahci nvme_core wmi button pcc_cpufreq acpi_cpufreq [last unloaded: md_mod]
      Apr 15 22:35:44 UnRaid kernel: CPU: 1 PID: 44323 Comm: unraidd0 Tainted: G D O 4.19.107-Unraid #1
      Apr 15 22:35:44 UnRaid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X399 Taichi, BIOS P3.90 12/04/2019
      Apr 15 22:35:44 UnRaid kernel: RIP: 0010:do_exit+0x64/0x922
      Apr 15 22:35:44 UnRaid kernel: Code: 39 d0 75 1a 48 8b 48 10 48 8d 50 10 48 39 d1 75 0d 48 8b 50 20 48 83 c0 20 48 39 c2 74 0e 48 c7 c7 79 fc d2 81 e8 6f b2 03 00 <0f> 0b 65 8b 05 28 2a fc 7e 25 00 ff 1f 00 48 c7 c7 b9 03 d4 81 89
      Apr 15 22:35:44 UnRaid kernel: RSP: 0018:ffffc90010907ee8 EFLAGS: 00010046
      Apr 15 22:35:44 UnRaid kernel: RAX: 0000000000000024 RBX: ffff8897e3496c00 RCX: 0000000000000007
      Apr 15 22:35:44 UnRaid kernel: RDX: 0000000000000000 RSI: 0000000000000002 RDI: ffff88982ce564f0
      Apr 15 22:35:44 UnRaid kernel: RBP: 0000000000000009 R08: 000000000000000f R09: ffff8880000bdc00
      Apr 15 22:35:44 UnRaid kernel: R10: 0000000000000000 R11: 0000000000000044 R12: ffff88982491f000
      Apr 15 22:35:44 UnRaid kernel: R13: ffff8897e3496c00 R14: 0000000000000009 R15: 0000000000000009
      Apr 15 22:35:44 UnRaid kernel: FS: 0000000000000000(0000) GS:ffff88982ce40000(0000) knlGS:0000000000000000
      Apr 15 22:35:44 UnRaid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Apr 15 22:35:44 UnRaid kernel: CR2: ffff88982491f000 CR3: 0000001824e78000 CR4: 00000000003406e0
      Apr 15 22:35:44 UnRaid kernel: Call Trace:
      Apr 15 22:35:44 UnRaid kernel: ? md_open+0x2c/0x2c [md_mod]
      Apr 15 22:35:44 UnRaid kernel: ? kthread+0x10c/0x114
      Apr 15 22:35:44 UnRaid kernel: rewind_stack_do_exit+0x17/0x20
      Apr 15 22:35:44 UnRaid kernel: ---[ end trace 10a9007de2546ba6 ]---
      Apr 15 22:35:51 UnRaid kernel: XFS (sdp1): Unmounting Filesystem
      Apr 15 23:01:42 UnRaid emhttpd: shcmd (5654): /usr/sbin/hdparm -y /dev/nvme1n1
      Apr 15 23:01:42 UnRaid root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
      Apr 15 23:01:42 UnRaid root:
      Apr 15 23:01:42 UnRaid root: /dev/nvme1n1:
      Apr 15 23:01:42 UnRaid root: issuing standby command
      Apr 15 23:01:42 UnRaid emhttpd: shcmd (5654): exit status: 25
      Sometimes the system will partially fail, in that disks will become inaccessible or the UI will flake out, but it's sometimes possible to shut down from the command line. In other cases, the system freezes with blinking keyboard lights, and only a hard reset will do. Switching to safe mode makes the problem go away. I have attached a diagnostics archive for both the crash-state system and the working safe-mode system.
      WHAT HAVE I TRIED?
      The following have had no impact;
      Upgraded the system from 6.8.2 to 6.8.3
      Disabled both Docker & VMs (individually & together)
      Updated all plugins to current versions
      Optimised the BIOS for stability (disabled C-states, reduced memory frequency and a few other bits & bobs)
      Removed some smaller drives from the array & new config
      Removed some PCIe devices
      *WHAT INSTIGATED ALL THIS?
      I made a goodly number of changes to my system over the weekend, including adding and shuffling PCIe devices, adding extra RAM, some rewiring etc. On reboot, my dual parity drives disappeared and, when I resurrected them, the system called for a parity sync. I started this, but during the sync I noticed one of my array drives was offline in a couldn't-be-mounted / needs-to-be-formatted state. It was a superblock issue on XFS (input/output error) and no amount of coaxing could get it back. Fortunately, I could mount the drive and rsync the contents off to a spare unassigned device. During this process, my USB key also showed up as blacklisted, so in the space of a few hours I lost parity, one of my array drives and my system 😞 I got USB back through a Windows repair and, with the bad disk contents saved, tossed the offender out of the array (I tried all the XFS repair stuff, a complete disk zero via dd and some other things, but I could not get that drive to format again). I transferred the saved files back to one of the other array disks and got all my data back online. I then re-added the parity disks and started the sync. Throughout all of this, I was getting frequent crashes as described, and only when my 3rd parity sync failed did I move to safe mode. I've now got the array working and parity in place. I just need to figure out how I can get stability back with my plugins enabled. Thanks for reading.
      unraid-diagnostics-20200416-1825_safemode_stable.zip
      unraid-diagnostics-20200415-2332_plugins_crash.zip
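     On the 50% tests mentioned above: plugins live as .plg files on the flash drive, so one low-tech way to disable half at a time is to park them in a holding folder and reboot (a rough sketch; the folder name and plugin filename are just examples):
        # Park a subset of installed plugins so they don't load on the next boot
        mkdir -p /boot/config/plugins-disabled
        ls /boot/config/plugins/*.plg
        mv /boot/config/plugins/some-plugin-name.plg /boot/config/plugins-disabled/
        # Reboot and retest; move the files back to re-enable those plugins.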
  20. Hi All. I have an RX 570 in my system that was the primary GPU (of 2) and had been used successfully for alternately booting a Windows 10 or MacOS VM. This weekend I had occasion to shuffle around my system innards and, in the process, added a new primary GPU, moving this one to a different slot (also 16x). However, now when I boot a VM (either Win10 or OSX), the GPU fan immediately stops spinning. The card will eventually start overheating and glitching after a few minutes, necessitating a VM shutdown. The fans don't start up again until a full system reboot. Any thoughts on why a GPU's fans would stop spinning when a VM boots? Especially since this card has been working perfectly with the same VMs and no XML config changes at all.
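     Not a confirmed fix, but one thing often tried when a passed-through GPU misbehaves after a slot change is feeding it a vBIOS dump via a <rom file='...'/> line in its hostdev XML. A rough sketch of dumping the ROM from sysfs on the host (the PCI address 0000:0a:00.0 is purely a placeholder for wherever the RX 570 now sits, and the card should be idle while you do this):
        # Dump the RX 570's video BIOS from sysfs (address is an example)
        cd /sys/bus/pci/devices/0000:0a:00.0
        echo 1 > rom          # enable reading of the ROM
        cat rom > /mnt/user/isos/rx570.rom
        echo 0 > rom          # disable it again
        # The dump can then be referenced from the VM's XML with
        # <rom file='/mnt/user/isos/rx570.rom'/> inside the GPU's hostdev block.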
  21. Enjoyed that a lot, and now I have yet another podcast/chat channel to while away the hours with as I explore the guys' back catalog. @jonp One point: I have encountered exactly the same issue as Mike described with a multi-drive BTRFS cache setup. A hard reset would corrupt the whole cache, and no amount of scrubbing or recovery would bring it back. I've downgraded to a single XFS cache as it was just too unreliable. Looking forward to multiple pools! Now, I'm off to do some tweeting to try to win one of those case badges.....
  22. I have the GD07 and it's a brilliant case. The removable drive cage is a great feature. There are a few downsides: it can be tight to work in, and doing any after-build changes usually requires removing all the drives and a good deal of faffing about. Also, you can't really access or use any front USB ports because of the front flap. Not a deal breaker, but something to be aware of. Apart from that, though, I like it a lot. Great flexibility, decent cooling and reasonably well built.
  23. I believe that you can make non-destructive changes using the virt-manager docker container.