thedudezor

Everything posted by thedudezor

  1. This solved the issue, thank you.
     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source dev='/dev/disk/by-id/ata-CT4000MX500SSD1_2251E6961C0F' index='1'/>
       <backingStore/>
       <target dev='hdd' bus='virtio'/>
       <alias name='virtio-disk3'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </disk>
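     A hedged aside for anyone landing here: the stable by-id path used above can be looked up on the host with plain ls (the grep pattern below just matches this thread's /dev/sde example):
       ls -l /dev/disk/by-id/ | grep sde    # by-id symlinks survive reboots even when sdX letters shift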
  2. Yes, it boot loops, I suspect, when trying to start the VMs. I say this because I can see it finish booting on the monitor and get to the login prompt. I can even log in to the webUI if I am fast enough.
  3. Not sure what exactly to say here. I am trying to add a 2nd pass-through SATA SSD (/dev/sde) to an existing, known-working config. With the server powered on, I connect the SSD and the device shows up in Unraid as /dev/sde; I can then edit the disk's settings to indicate I would like to pass it through to the VM, add a 2nd vDisk location, set it to manual, and provide the location "/dev/sde". The VM boots and I can work with the disk without any issues. However, if I reboot the Unraid server (not the VM; the VM can be rebooted without issue) and leave the disk assigned to the VM, the Unraid server will boot loop over and over until I remove it. No idea why this is happening. Attached are my diagnostics if anyone has pointers on what else I could try here. VM config, no 2nd drive:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='2'>
       <name>Mint</name>
       <uuid>3b3b4c86-9f52-2a22-b98c-7721fab88048</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>13107200</memory>
       <currentMemory unit='KiB'>13107200</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>12</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='26'/>
         <vcpupin vcpu='2' cpuset='11'/>
         <vcpupin vcpu='3' cpuset='27'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='28'/>
         <vcpupin vcpu='6' cpuset='13'/>
         <vcpupin vcpu='7' cpuset='29'/>
         <vcpupin vcpu='8' cpuset='14'/>
         <vcpupin vcpu='9' cpuset='30'/>
         <vcpupin vcpu='10' cpuset='15'/>
         <vcpupin vcpu='11' cpuset='31'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/3b3b4c86-9f52-2a22-b98c-7721fab88048_VARS-pure-efi.fd</nvram>
         <boot dev='hd'/>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='6' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/nvme1n1' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='sata'/>
           <alias name='sata0-0-2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <alias name='pci.6'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <alias name='pci.7'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:a0:f4:20'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Mint/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x11' slot='0x00' function='0x4'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0x082c'/>
             <address bus='3' device='3'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x0764'/>
             <product id='0x0501'/>
             <address bus='7' device='6'/>
           </source>
           <alias name='hostdev5'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
     atlantis-diagnostics-20230317-1627.zip
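     A hedged way to double-check which host block devices a defined domain actually references after a reboot (the domain name "Mint" comes from the XML above; virsh is part of the libvirt tooling Unraid uses):
       virsh dumpxml Mint | grep 'source dev'    # lists every block device path the domain points at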
  4. Helpful as always, @Squid! Just to be clear, it sounds like I must have, at minimum, a single drive allocated to the array? Since VMs / Dockers etc. require the array to be running, the minimum is that single (basically throwaway) drive. Just checking, since I have never attempted to run Unraid with only a single data drive and no parity drive.
  5. A few months ago I migrated my spinning-rust array over to an old 2012 build that I had, due to hard drive thermal issues occurring during parity checks. I've been going back and forth trying to find a new case that can support both my newer hardware AND roughly 6-8 platter drives; it seems the only way to get both is to use vintage cases. Anyway, I'm starting to miss the VM and Docker functionality (well, it's not missing, it's just silly slow running on an old machine), and that got me thinking: can I keep the spinning rust as a "storage" machine and spin up a new Unraid instance that has NO traditional hard drive array, only an SSD pool and passed-through NVMe's? Thoughts, comments?
  6. Thanks for the added inspiration @tjb_altf4. Yes, I'm pretty sure I already viewed that video, but as with the case you have in your build, it's slightly different. I wanted to go with the version that has the 3x 5.25" bays already populated with hot-swap trays, versus going with the model you have (and that's in the video you posted), which would require me to buy the hot-swap backplanes and swap them in (probably another $400 in added cost too). But... that version has the fan wall I could mount my rad onto. Ugh, I wish they would include the fan wall on the other version also. Anyway, not that I need that many drives, but the sick drive cage you made almost makes me want to rethink going with the other version, in case you decide to release the CAD files. It's a pretty close replica of the 45Drives enclosure that I was looking at after finding out that's what LTT was using for their Unraid servers... until I got the quote from them at ~$1,500 just for the bare case.
  7. The Rosewill 4U I am looking at only has a support bracket in the center, with no fan wall behind the HDD bays. Is this the same as what you have, and if so, how did you mount the rad in the case? I'm sure it will have plenty of space, since I'm only running an ATX-sized mobo, so that wouldn't be an issue. The AMD 5700 XT sticks out, I think, another 2 or 3 inches past the mobo, but even then I should have more than enough space to place the rad in the case before it hits the back of the HDD backplane / fans.
  8. I've been running Unraid as my primary desktop / server for over a year now and I really enjoy many aspects of it. However, I recently upgraded and added an additional spinning-rust drive to my Lian Li O11 Dynamic enclosure, and the drive temps are just not coming down even after adding all the fans that I could. Yeah, I know this is a horrible case for spinning-rust drives, but when I started the build I never intended to put mechanical drives in it at all, just NVMe's and a few SSD's. I wanted it to be a modern desktop PC. lol... So at this point I have migrated the "array" part of my Unraid build to an old build I had laying around (Corsair 500R, 8+ year old C2D CPU, etc.). That case is actually almost perfect for an Unraid build compared to my O11 Dynamic, and I even had a 5.25" to 3x 3.5" hot-swap backplane that keeps the drives cool. At the end of the day, though, the hardware is junk for trying to run VMs and Dockers, the part of Unraid I enjoy the most. As for the "desktop", since that VM's NVMe drive was passed through, I was able to just pull the Unraid USB drive out, select the NVMe drive as the new boot drive, and it booted right up without any issues at all. Seriously, gotta love that. As a result, I am now at a crossroads as to what to do, hence the reason I am posting here to see what you all think.
     Option 1: Upgrade the old build to a 5700X, allowing me to run VMs and Dockers on the Unraid server, but keep my "desktop" as a traditional bare-metal machine. I feel kinda bad about this, considering the desktop is WAY overkill in terms of CPU / other I/O (3950X, 32GB RAM, 2x NVMe's, 3x SSD's).
     Option 2: Don't upgrade the old machine, keep it as a spinning-rust file server (the hardware is fine for that), and instead Unraid my other "desktop" machine. In this configuration I would have no interest in the array part at all, but can you even start VMs and Docker without any disks in the array? I know you can add SSD's to a cache pool, but don't you still need disks populating the array? It's almost like I want a high-speed NVMe / SSD-only version of Unraid that focuses on the virtualization part. Not sure Unraid is even still the right fit for that.
     Option 3: Pick up a Rosewill RSV-L4412U 4U case and migrate everything back into a single machine. The only challenge here is that I have a 360mm watercooled AIO for the CPU that might be interesting to fit / fabricate a bracket for, but if successful this would give me the spinning-rust storage with proper cooling and the ability to run it on my newer hardware for VMs / Dockers.
  9. Background: I am planning a trip for the next several weeks and would like to take my workstation VM image with me and install it on a different machine during my travel. Considering my workstation is just a vdisk image and, as far as I am aware, Unraid is just running QEMU, I thought this would be easy. So I installed: qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager. Then I took a copy of my vdisk1.img and attempted to boot it in virt-manager with no luck. The VM has a machine type of i440fx-4.2 and virt-manager only offers plain i440fx as an option, so that could be an issue, I guess. I also suspect that EDK2 (TianoCore) is holding me back here. I see plenty of posts about trying to run Unraid in a VM, but have yet to find any documentation about how to run a VM created in Unraid somewhere other than in Unraid. I guess I could just buy another key and use that for the 3 weeks while I am away from my server, but I'm looking to avoid that since I will have little to no long-term use for it.
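     A minimal sketch of importing a raw Unraid vdisk on another libvirt host; every name, size and path below is a placeholder, and --boot uefi assumes OVMF/edk2 firmware is installed on that machine:
       # boot the existing raw disk image with UEFI firmware, no installer
       virt-install \
         --name mint-travel \
         --memory 8192 --vcpus 4 \
         --cpu host-passthrough \
         --boot uefi \
         --disk path=/var/lib/libvirt/images/vdisk1.img,format=raw,bus=virtio \
         --network bridge=virbr0 \
         --os-variant generic \
         --import
     The exact i440fx version offered by virt-manager is usually not the blocker for a Linux guest; a firmware mismatch (SeaBIOS instead of the OVMF the VM was installed under) is the more likely reason an imported image refuses to boot.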
  10. I knew I forgot something else. The funny thing is, I must have looked at it when I was trying to fix it, but it was set to auto, so I figured it would not use SSL if it couldn't resolve. Thank you!
  11. So I ended up rebooting and selected the webUI mode on boot. I was able to get in and uninstall the unraid.net app; however, my server is still trying to resolve to the URL [pointer].unraid.net.
  12. I installed the unraid.net app and registered my server with unraid.net. After that, the only way to get to the server was via the URL [pointer].unraid.net (going to the local IP is no longer allowed). I can still SSH in, of course; however, since my internet decided to die this afternoon, I'm left with a somewhat interesting issue: I need to pass through a USB wifi dongle to a VM, but can't since the UI is not accessible, lol. Any ideas here?
  13. I hope so. This big navi hardware reset bug makes me ready to switch to team green (assuming supply issues return to normal at some point). lol
  14. I think this post can be tagged as solved. Based on everyone's support here, I have everything running well now and it is stable. Thank you, everyone, for your help as always!
  15. Wow. Yes, I completely misunderstood adding a new pool versus adding the drive to the cache pool. Had I looked at the release thread, I would have realized that we can now add SSD's to any pool (not just the cache-only pool). It makes complete sense why it's raid0, so like you said, even though the 1TB SSD is in the cache pool, it's restricted to the smallest member of that pool (the 240GB drive). It is somewhat misleading, though, that the webUI shows the total size of that cache pool as 1.2TB when it should be shown as the actual usable 240GB? Yes, I know the parity is invalid; the rebuild onto the new, larger parity drive is still ongoing and I am not going to make any further changes until that process has completed. I need access to the VMs that got nuked on the btrfs-errored drive (which I can restore from the array disks), but I refuse to do anything until that rebuild is done. I do not have another SATA port on my mobo, which is why I didn't just add the new parity drive while the other parity drive was still installed.
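     A hedged way to see how btrfs itself accounts for a mixed-size pool (assuming the pool is mounted at /mnt/cache, Unraid's usual location):
       btrfs filesystem usage /mnt/cache    # "Device size" shows the raw total; "Free (estimated)" reflects what is usable under the current profile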
  16. Attached. universe-diagnostics-20200910-1001.zip
  17. Here is what I have done so far:
     - I removed the overclocked RAM profile in the BIOS.
     - A reboot seems to have auto-magically corrected the missing share issue and the "user not allowed" issue, and I am no longer seeing the virtual NIC bouncing in and out of promiscuous mode endlessly in the logs.
     - I had already planned on upgrading the parity drive to a larger one so I could re-use the old parity drive in the array, as I need more space (old config was 4TB parity + 3x 2TB; planning to move to 8TB parity with 2x 2TB and 1x 4TB). Once the rebuild is done, I will swap out one of the 2TB drives with the old 4TB drive.
     - I found that the disk2 drive (sdg) had possibly not been fully seated into the hot-swap bay. (For some reason I used different screws on this drive caddy. Go figure...) I will clear the CRC errors and monitor (see the SMART sketch below), and/or maybe just replace this drive anyway as part of the upgrade mentioned above.
     - I wanted to try a cache pool as a possible option since my other SSD was fubar'ed with btrfs errors. I added that, and it's now almost completely full (1.2TB), even though I only have appdata, system, and a "temp" user share with hardly any data in it set to preferred... Not sure how to undo this now? I almost don't even want a cache drive; I'm OK with spinning-disk array speeds for anything other than "temp" VM-to-VM file sharing (small, sub-10GB files). Not sure how screwed up things will get if I pull that newly added 2nd drive back out and just use it as a UD drive again.
     - I will start shutting down by first stopping the array, waiting for that to confirm, then shutting down. So far this has not been an issue, but like I said, the unclean shutdowns weren't happening every time anyway.
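     A hedged monitoring sketch for the CRC item above (the device letter is just this thread's example; smartctl is available from the Unraid console). The raw value of attribute 199 is cumulative and never resets on the drive itself, so what matters is whether it keeps climbing after the reseat:
       smartctl -A /dev/sdg | grep -i -e crc -e reallocated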
  18. 1. Got it. I'll have to check the cable / connection and monitor this. Should have known better, thank you.
     2. I get that an unclean shutdown would cause a check. I'm shutting the server down the same way I always have (shut down the VM -> wait, then use the webUI -> shutdown). Not sure why *recently* this has become a problem. Like I said, it's not every time I shut down; it has just, for some reason, become more frequent. I guess I'll try adding more wait time after the VM is down?
     3 & 4. Appdata, system and domains are all set to use cache. My "main" VM (the one that locks up when a system-invoked parity check runs) is on the UD NVMe, so its vdisk should not be a member of the disks getting checked?
     - As for these even sitting on the array in the first place, I stopped Docker and moved appdata, so is that fixed? As for domains, I wanted to allow vdisks to be created on the array, considering the cache drive is small (250GB); for the most part, after I create a VM, I plan on moving the file to a UD SSD or NVMe.
     - The system folder / files I have no idea how to resolve. Can someone point me to a wiki guide on how to know which folders / files can be safely moved to the user mount?
     - As for creating a cache pool, I would need to read more about this as an option. I agree it makes sense to pool my two SSD drives, as they have similar speed characteristics (even though the smaller drive is on a SATA 3Gb/s link vs SATA 6Gb/s, as I recall); however, the NVMe drive is a substantially faster tier of device. Because of that, I think I will always just use UD and tie that disk to my main VM.
     - Specifically on 4 (the reported drive errors), the cache drive is not reporting the btrfs errors; the UD sdc1 device is. I attempted to recover based on the guide posted, and it looks like I can get 1 of the 3 VM vdisks back. Not sure about the integrity of that vdisk, so I'm considering all of the data on this disk a loss. I guess I need to format the sdc1 disk anyway, so maybe giving the cache pool a try is a good option?
     5. Not much else I can help with here. The same username / pass has been used for a long time now. I can see the main domain folder, but it is not accessible. Even the public / exportable shares are not displayed. I would say I could test with another known machine... but my other VMs are missing now due to #3 & 4 listed above. lol I have not tried a reboot, so I hope this auto-magically corrects itself. [EDIT: A reboot did fix this]
     6. Not sure what else to help with here, other than that it just shows that almost endlessly in the log.
     General - I apparently didn't realize 3600MHz was an overclock. Pretty sure the mobo I have defaulted to this speed using the baked-in memory profile. I'll dial that back to 3200 and see if things improve. Again, thank you.
  19. I've been fairly happy with my Unraid solution; however, I recently started to see some issues that might force me to rethink using this as my daily workstation / enhanced NAS box all in one.
     1. I recently noticed that disk2 (sdg) looks like it has some CRC errors on it. I have a new drive getting delivered today. The plan is to remove the current parity drive (4TB), install the new 8TB drive, let the parity re-sync, then reuse the current parity drive as drive 2's (sdg) replacement.
     2. For the past week or so, parity checks (scheduled monthly) have been running almost every other reboot (I don't keep this online 24/7). I have disabled the checks for now, considering #3 on this list.
     3. Over the same time period as #2 (they seem to be related), parity checks now cause my VM to become unresponsive whenever the VM's screensaver turns on. If I invoke the parity check manually, the VM is fine... it's only when the system runs it. Since I was so smart when building this system and decided to get an AMD 5700... the box requires a complete reboot every time this happens (thanks AMD, you....). Since this is my main machine, it becomes very annoying very quickly.
     4. My unassigned SSD (sdc) is just flowing with btrfs errors and gets put into read-only mode. Lucky for me, my main VM is on nvme0n1, so I'm kind of still up and running, but a few of my other VMs' vdisks are now completely missing from this drive by the looks of it. Even if they are not recoverable, it's not exactly the end of the world, as I do have backups on the array (assuming that also is not completely messed up at this point).
     5. Shares are no longer accessible. It says my user is not allowed, so I cannot even browse any of my files anymore.
     6. The network device br0 keeps going in and out of promiscuous mode. No idea what is causing that.
     Is my system doomed, and should I jump from this sinking ship? universe-diagnostics-20200909-1153.zip
  20. I'm sure I'm just missing something (most likely VERY simple) here, but for the life of me I cannot figure out why this is an issue. The backstory is that I have been getting into Arch recently and wanted to start playing around with different window manager / other package configs, etc. On paper I figured: why not do the base install with the bare-bones minimal packages, then make a copy of that "base" install's vdisk image and point the copy at a new VM, so I can test out the various configurations without having to redo the entire install each time. I have been doing this with my Windows VMs and my Mint daily-driver VM (which also uses GRUB EFI boot) for months now. I copy the vdisk file to a share (I actually copy it twice, and one of the copies is stored on a faster-tier SSD as a "hot spare") so that if I somehow nuke a VM, I can create a new one, no problem: just copy the vdisk over and it boots without an issue. However, with the Arch VM, as soon as I attempt to boot off the cloned vdisk image, the EFI partition just seems to be missing? It's a copy of the exact same vdisk I can boot from just fine in the original VM it was created in, but as soon as that vdisk is moved, when I go into the BIOS, the GRUB EFI entry is not an option at all. The only thing I can think of is that the virtualized BIOS has a different hardware ID and the EFI entry is somehow tied to that specific unique ID? To be clear, I CAN just use the same VM instance and move the vdisk around between the cloned versions, but it seems strange to be confined to ONLY the VM the install was done on when in theory it should be portable, especially considering I am doing exactly that today with my Windows and Mint VMs.
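     One hedged possibility, not confirmed in this thread: OVMF keeps its boot entries in the per-VM NVRAM file rather than on the vdisk, so a cloned disk attached to a brand-new VM starts with no entry pointing at Arch's GRUB, whereas Windows and many distro installers also populate the fallback path EFI/BOOT/BOOTX64.EFI that firmware will try even without an entry. A minimal sketch of a workaround, assuming the ESP is mounted at /boot inside the Arch guest (adjust to the real layout):
       # install GRUB to the removable/fallback path so firmware finds it with no NVRAM entry
       grub-install --target=x86_64-efi --efi-directory=/boot --removable
     Alternatively, the new VM's firmware boot menu (or efibootmgr inside the guest) can be used to add an entry for the existing grubx64.efi on the ESP by hand.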
  21. Hmm. Yes, a reboot allowed the disk to mount, and I am running a check now. Thank you again!
  22. Yes, I unmounted it from UD, selected disk 3 (all in blue as you expected), then checked "parity is already valid" and started the array. New diag attached. universe-diagnostics-20200425-1236.zip
  23. Ohh... I get an error, "Unmountable: No file system", on disk #3 now. Is the only option to format it?
  24. OK. The data not in lost+found looks to be OK from what I can tell. I do have some files in lost+found, but nothing important. Will give it a try. Thank you again for your help here. Greatly appreciated, and have a good rest of your weekend!
  25. xfs_repair -v /dev/sde1
     That worked, and I was able to mount and view the files / folders on that drive.
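     A hedged sketch of the usual sequence, for anyone repeating this (the device node is just this thread's example, and the filesystem must be unmounted first):
       xfs_repair -n /dev/sde1    # dry run: report what would be fixed, change nothing
       xfs_repair -v /dev/sde1    # verbose repair
     If xfs_repair refuses because of a dirty log, mounting and cleanly unmounting the filesystem once will usually replay it.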