Everything posted by craigr

  1. It's still got well over 14 hours until the rebuild is finished so hopefully by then I can be certain.
  2. Really, I just want to be certain my current parity is OK since I paused and wrote during a parity rebuild. I'm 99.99999% sure it is, I'd just really appreciate a developer commenting on this one. Thanks for your help!
  3. Not if I restore my flash to its prior config and put back the previous two drives. I can't lose any data because the data is still on the original discs, which can be used to rebuild a bad disc if that happens.
  4. That was my reasoning, but in this case, since I already had parity, it was not faster. The parity (although not completely rebuilt yet, and with the rebuild paused) and all other drives were still written to when the data was transferred to the array. Logic would dictate I am safe, I think... I just want to be absolutely certain.
  5. Also worth noting, I am doing two drives at a time with dual parity. Before beginning each dual drive swap, I am backing up my flash drive so that if a drive fails during rebuild, I can revert to the prior config and swap the new drives back for the old ones, which still contain valid data. This way I am staying safe even if I lose a drive. I realize it's not the absolute safest approach, but it's pretty sound, and I have 14x 14 TB drives to swap, so it's a huge time saver.
  6. I think I am OK, but since all my data could be at risk, I want to make extra sure. I replaced both parity drives in my machine, migrating from 8TB to 14TB. During the parity rebuild I needed to write data to the array. I have COVID and my brain is far from 100%. I paused the parity rebuild while I wrote the data to the array, thinking it would be faster, as with a new array and no parity. Of course, this was not correct, because while writing the data to the array, all hard drives in the array, including the new parity drives, were actively being written to. Along with my confidence that unRAID would be "smart" enough to deal with this, the fact that all discs were active tells me my parity is sound. Am I OK, or is it prudent to resync parity again (sigh) when it finally finishes? I am trading out 12 more drives and don't want to rewrite them with corrupt parity data and basically lose everything! Thanks for confirmation (hopefully), craigr
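For what it's worth, the reason writes during a (paused) rebuild keep parity sound can be sketched with a toy single-parity model. This is my own illustration, not unRAID's actual code: every array write is a read-modify-write that folds the change into parity, so parity stays the XOR of all data disks at every position.

```python
from functools import reduce
from operator import xor

# Toy model: each "disk" is a list of byte values.
data_disks = [[0x00] * 4 for _ in range(3)]
parity = [0x00] * 4  # single parity = XOR of all data disks

def write_block(disk, index, value):
    """Read-modify-write: every array write also updates parity,
    which is why writes during a (paused) rebuild stay consistent."""
    old = data_disks[disk][index]
    data_disks[disk][index] = value
    parity[index] ^= old ^ value  # fold the change into parity

write_block(0, 2, 0xAB)
write_block(1, 2, 0x5C)

# Parity still equals the XOR of all data disks at every position.
assert all(parity[i] == reduce(xor, (d[i] for d in data_disks))
           for i in range(4))
print("parity consistent")
```

The same argument extends to dual parity; only the parity function changes, not the read-modify-write logic.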
  7. No issues with 6.11.4 to 6.11.5 including dockers and VM's. I am not running PLEX now though.
  8. Upgraded from 6.11.3 seemingly without issue. All VM's and dockers running. Cache pools still present 😉 Thanks guys.
  9. Thanks man. I've got the 200mm Noctua NF-A20 PWM in the side as well blowing air in. I held out for two years waiting for Noctua to release a grey Redux version, but finally gave in several weeks back when my existing fan started failing. I also can't believe I spent over $30 on red rubber bits, but I'm in the mood to bring joy to myself through consumerism 🤑
  10. This is definitely NOT how I did it because my computer never went to sleep. Also, the file dates for my v-bios are a month older than this video.
  11. I don't remember, but it may very well have been GPU-Z. I'm considering getting a new video card, but honestly, I may not want to go through figuring this out again. I watched Spaceinvader's video, read comments, found threads on the unRAID forum and Reddit, and eventually managed to get the BIOSes off two cards. It was not at all by the book. I don't think this is the Spaceinvader video I watched... I just found this, which might be very helpful. You could maybe boot into a Ubuntu live USB and do it: I can't seem to find Spaceinvader's older video....
  12. Forgot to add that I custom built all the power cables for all the hard drives and bays. I use 16 AWG pure copper primary wire. It's fun to keep all the black wires in the correct order and not fry anything, but I proved it doable 🙂. Having a modular power supply is nice. I was able to split 10x "spinners" per power cable in order to stay within the amperage limits of the modular power connectors and maintain wire runs without significant voltage drop. Both power wires for the PODs are in one PET expandable braid sleeving finished off with shrink tube on both ends. They break out and split between the PODs and go up and down (look between PODs two and three in the pics and you can see). They also break out and split to two ports on the power supply. The SATA power wires I braided for fun. First time I ever did a four-wire braid.
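A quick back-of-the-envelope check on that kind of run. The cable length and per-drive 12 V draw below are my own assumed numbers, not measured from this build; 16 AWG copper is roughly 13.2 mΩ per metre.

```python
# Rough voltage-drop sanity check for a 10-drive 16 AWG power cable.
# LENGTH_M and AMPS_PER_DRIVE are assumptions for illustration only.
RESISTANCE_16AWG = 0.0132   # ohms per metre, approximate for 16 AWG copper
LENGTH_M = 0.6              # assumed one-way cable run
DRIVES_PER_CABLE = 10
AMPS_PER_DRIVE = 0.55       # assumed steady-state 12 V draw per spun-up drive

current = DRIVES_PER_CABLE * AMPS_PER_DRIVE
# Round trip: current flows out on +12 V and returns on ground.
drop = current * RESISTANCE_16AWG * LENGTH_M * 2
print(f"{current:.1f} A load, {drop * 1000:.0f} mV drop on the 12 V rail")
```

With those assumptions the drop is well under 1% of 12 V, which matches the "no significant voltage drop" claim; spin-up surge current would be several times higher, but only briefly.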
  13. The back of your PODs (cages) looks entirely different?
  14. I use Norco SS-500 drive cages. These look similar but are maybe knock offs? Here are the Norco's: craigr
  15. I am also interested in knowing which hard drive pods, cages, or whatever you want to call them those are. They look similar to my Norco PODs, but they are not the same ones.
  16. Looks like a nice budget case that will keep drives cool. Great build that will grow with you as you add drives. Nice.
  17. Never thought I would really care, but lately I've been dolling up the server. The original build started in a different case around 14 years ago, but the server has resided in this Xigmatek Elysium for ~10 years. Hardware changes all the time, it seems. I'm currently swapping out all but five of the 8TB drives for 14TB and 12TB WD Reds. My current hardware is below, but it is usually up to date in my signature.

Server Hardware: SuperMicro X11SCZ-F • Intel® Xeon® E-2288G • Micron 64GB ECC • Mellanox MCX311A • Seasonic SSR-1000TR • APC BR1350MS • Xigmatek Elysium • 4x Norco SS-500 Hard Drive PODs • 11x Noctua Fans.

Array Hardware: LSI 9305-24i • 136TB on WD Helium Reds • 19x WD80Exxx • Cache: 1x WD100Exxx • Pool1: 2x Samsung 1TB 850 PROs RAID1 • Pool2: 2x Samsung 4TB 860 EVOs RAID0.

Dedicated VM Hardware: Samsung 970 PRO 1TB NVMe • Inno3D Nvidia GTX 1650 Single Slot.

Here are some pics... 10Gb fiber goes from the Mellanox MCX311A to a Brocade 6450 switch (finally ran the fiber over the weekend). That feeds the house and branches off. 5Gb comes into the 6450 switch from my modem, and I typically get around 3Gb (+/-0.5Gb) from my ISP. It's a main line directly to my server, from which I run VMs and which I use as my primary workstation. I really love unRAID and how easy virtualization is. craigr
  18. You sure did score some very nice hardware and make great use of it. The CSE-846 is one of my dream cases, but I don't have the space. 256GB is a lot especially if you want to get power usage down. Unless you are running loads of VM's and dockers you are probably fine with 64GB or even 32GB with moderate virtualization. I'm well under 100 watts idle and still under 300 watts all out. Really nice build though.
  19. So yeah, this is inside my OVMF BIOS setup boot manager. Why two Windows Boot Managers, when I don't think I even really need one, because I'm pretty sure there is also a Windows boot manager on the SSD itself? If I highlight the SSD and press enter, I boot directly into Windows 11.
  20. Originally I had a Windows 10 VM set up to run on my NVMe bare-metal using spaces_win_clover.img. I then upgraded to Windows 11 and eliminated the need for spaces_win_clover.img (I think it was not compatible with Windows 11, or maybe I just didn't want to use it anymore because it was no longer needed in unRAID). However, after I did that I could no longer get my Windows 11 VM to boot properly or consistently. This is the thread where I sort of sorted it out, or at least got my VM working again:

With the release of unRAID 6.11.2 and 6.11.3 I have encountered the problem all over again. With this section in my xml file the VM will boot properly every time, the key line being <boot dev='hd'/>:

  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/1a8fdacb-aad4-4bbb-71ea-732b0ea1051a_VARS-pure-efi-tpm.fd</nvram>
    <boot dev='hd'/>
  </os>

I have completely removed the VM (not the VM and disk, just the VM) and started over. I have assigned the NVMe to boot order 1, which adds the boot order line to the xml file:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <driver name='vfio'/>
    <source>
      <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </source>
    <boot order='1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
  </hostdev>

When I do that, no <boot dev='hd'/> line is generated by the template. Here is where it gets really weird. Only on the first boot after doing this will OVMF finish loading and boot Windows 11. If I shut down or restart Windows, OVMF will freeze, and I cannot boot again until I go back, edit the xml to remove the boot order line, and restore <boot dev='hd'/>?!? When I boot with <boot dev='hd'/> this is what I get: I can also press esc and enter the OVMF BIOS.

I have tried removing the TWO Windows Boot Managers that are in there, putting the NVMe as the first boot device, and saving (I've done this like 20 times). However, every time I go back into the OVMF BIOS, all the boot options are back, as if I never deleted or reordered them 😖. On my first boot with an xml containing <boot order='1'/> I get the exact same screen as above and can enter the OVMF BIOS with esc. But as stated, once I shut down Windows and reboot, I will only get the first "Windows Boot Manager," OVMF will not respond to esc, and it just freezes there. Finished. Done. I have to go back to the xml, remove the boot order line, and restore <boot dev='hd'/>.

Why do I have two Windows Boot Managers? Why can't I delete them? Why can I boot fine directly from my NVMe when I select it inside the OVMF BIOS, but can't use boot order in my xml and boot more than once? Here is my entire current xml that boots:

  <?xml version='1.0' encoding='UTF-8'?>
  <domain type='kvm' id='2'>
    <name>Windows 11</name>
    <uuid>1a8fdacb-aad4-4bbb-71ea-732b0ea1051a</uuid>
    <metadata>
      <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
    </metadata>
    <memory unit='KiB'>38273024</memory>
    <currentMemory unit='KiB'>38273024</currentMemory>
    <memoryBacking>
      <nosharepages/>
    </memoryBacking>
    <vcpu placement='static'>8</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='4'/>
      <vcpupin vcpu='1' cpuset='12'/>
      <vcpupin vcpu='2' cpuset='5'/>
      <vcpupin vcpu='3' cpuset='13'/>
      <vcpupin vcpu='4' cpuset='6'/>
      <vcpupin vcpu='5' cpuset='14'/>
      <vcpupin vcpu='6' cpuset='7'/>
      <vcpupin vcpu='7' cpuset='15'/>
    </cputune>
    <resource>
      <partition>/machine</partition>
    </resource>
    <os>
      <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
      <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
      <nvram>/etc/libvirt/qemu/nvram/1a8fdacb-aad4-4bbb-71ea-732b0ea1051a_VARS-pure-efi-tpm.fd</nvram>
      <boot dev='hd'/>
    </os>
    <features>
      <acpi/>
      <apic/>
      <hyperv mode='custom'>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vendor_id state='on' value='none'/>
      </hyperv>
    </features>
    <cpu mode='host-passthrough' check='none' migratable='on'>
      <topology sockets='1' dies='1' cores='4' threads='2'/>
      <cache mode='passthrough'/>
    </cpu>
    <clock offset='localtime'>
      <timer name='hypervclock' present='yes'/>
      <timer name='hpet' present='no'/>
    </clock>
    <on_poweroff>destroy</on_poweroff>
    <on_reboot>restart</on_reboot>
    <on_crash>restart</on_crash>
    <devices>
      <emulator>/usr/local/sbin/qemu</emulator>
      <disk type='file' device='cdrom'>
        <driver name='qemu' type='raw'/>
        <source file='/mnt/pool/ISOs/virtio-win-0.1.225-2.iso' index='1'/>
        <backingStore/>
        <target dev='hdb' bus='ide'/>
        <readonly/>
        <alias name='ide0-0-1'/>
        <address type='drive' controller='0' bus='0' target='0' unit='1'/>
      </disk>
      <controller type='usb' index='0' model='qemu-xhci' ports='15'>
        <alias name='usb'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      </controller>
      <controller type='pci' index='0' model='pci-root'>
        <alias name='pci.0'/>
      </controller>
      <controller type='ide' index='0'>
        <alias name='ide'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
      </controller>
      <controller type='virtio-serial' index='0'>
        <alias name='virtio-serial0'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      </controller>
      <interface type='bridge'>
        <mac address='52:54:00:29:50:c9'/>
        <source bridge='br0'/>
        <target dev='vnet1'/>
        <model type='virtio-net'/>
        <alias name='net0'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </interface>
      <serial type='pty'>
        <source path='/dev/pts/0'/>
        <target type='isa-serial' port='0'>
          <model name='isa-serial'/>
        </target>
        <alias name='serial0'/>
      </serial>
      <console type='pty' tty='/dev/pts/0'>
        <source path='/dev/pts/0'/>
        <target type='serial' port='0'/>
        <alias name='serial0'/>
      </console>
      <channel type='unix'>
        <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 11/org.qemu.guest_agent.0'/>
        <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
        <alias name='channel0'/>
        <address type='virtio-serial' controller='0' bus='0' port='1'/>
      </channel>
      <input type='mouse' bus='ps2'>
        <alias name='input0'/>
      </input>
      <input type='keyboard' bus='ps2'>
        <alias name='input1'/>
      </input>
      <tpm model='tpm-tis'>
        <backend type='emulator' version='2.0' persistent_state='yes'/>
        <alias name='tpm0'/>
      </tpm>
      <audio id='1' type='none'/>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev0'/>
        <rom file='/mnt/pool/vdisks/vbios/My_Inno3D.GTX1650.4096.(version_90.17.3D.00.95).rom'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
        </source>
        <alias name='hostdev1'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
        </source>
        <alias name='hostdev2'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
        </source>
        <alias name='hostdev3'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x00' slot='0x1f' function='0x5'/>
        </source>
        <alias name='hostdev4'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev5'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
      </hostdev>
      <memballoon model='none'/>
    </devices>
    <seclabel type='dynamic' model='dac' relabel='yes'>
      <label>+0:+100</label>
      <imagelabel>+0:+100</imagelabel>
    </seclabel>
  </domain>

PLEASE SOMEBODY HELP ME. I HAVE SPENT HOURS!!!
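If anyone wants to experiment with the same edit outside the GUI, the change described above can be sketched in stdlib Python. This is my own toy helper operating on a trimmed-down example domain (not the full xml above): it removes <boot dev='hd'/> from <os> and adds <boot order='1'/> to the passed-through NVMe hostdev.

```python
import xml.etree.ElementTree as ET

# Trimmed-down example domain for illustration only.
domain_xml = """<domain type='kvm'>
  <os><type arch='x86_64'>hvm</type><boot dev='hd'/></os>
  <devices>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source><address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/></source>
    </hostdev>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)

# Drop <boot dev='hd'/> from the <os> section...
os_el = root.find('os')
os_el.remove(os_el.find('boot'))

# ...and add <boot order='1'/> to the passed-through device instead.
hostdev = root.find('.//hostdev')
ET.SubElement(hostdev, 'boot', {'order': '1'})

out = ET.tostring(root, encoding='unicode')
print(out)
```

Note that libvirt itself rejects a domain with both boot styles at once, which is consistent with the template dropping <boot dev='hd'/> when a boot order is set; the freeze-on-second-boot behavior is the part this doesn't explain.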
  21. Well, I updated from 6.11.1 to 6.11.3 and the problem did not reoccur. My cache disks are all assigned and intact. I have rebooted multiple times to test, and it's all fine. Something with the upgrade from 6.11.2 to 6.11.3 obviously went sideways that did not happen after I reverted back to 6.11.1 and upgraded from there. Oh well. Now if I could only get my Windows 11 VM to boot more than once with boot order enabled... but I have a workaround for that, so I can live with it for now. Thanks, craigr
  22. I can try 6.11.3 again later when I have more time and get the log files. Right now, I just need a functioning computer.
  23. I don't know. I've been using unRAID for over ten years and have had very few problems with complicated setups running VMs and dockers. I use unRAID on many of my clients' systems; they pay me to build them servers for their home theaters. I have found support to be quite good, and developers have stepped in when necessary. My experience has been great with only a hiccup here and there, but what I am doing is complex and I would expect this.
  24. Yes, my pool devices were assigned again (thank the maker). I did of course have a backup from 6.11.1. I recreated the flash drive from scratch using a fresh download of 6.11.1 and copied over my config directory. Funny thing too is that my SSD cache pools use the SATA ports on the MB and the cache drive uses a port on my LSI controller. Why lose ALL cache drives?!?
  25. I just tried to upload my hardware configuration with hardware profiler and that failed, "Sorry, an error occurred. Please try again later." However, all of my hardware is in my signature.