thedudezor

Members · 33 posts

  1. This solved the issue, thank you.

     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source dev='/dev/disk/by-id/ata-CT4000MX500SSD1_2251E6961C0F' index='1'/>
       <backingStore/>
       <target dev='hdd' bus='virtio'/>
       <alias name='virtio-disk3'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </disk>
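
     In case it helps anyone else landing here: the stable device name above came from /dev/disk/by-id. A minimal way to look these up, assuming a stock Unraid/Linux shell (the grep just hides the per-partition entries):

       # List persistent device names and see which /dev/sdX each points at
       ls -l /dev/disk/by-id/ | grep -v -- '-part'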
  2. Yes, it boot loops, I suspect when it tries to start the VMs. I say this because I can see it finish booting on the monitor and get to the login prompt; I can even log in to the webui if I am fast enough.
  3. Not sure what exactly to say here. I am trying to add a second pass-through SATA SSD (/dev/sde) to an existing, known-working config. With the server powered on, I connect the SSD; the device shows up in Unraid as /dev/sde, and I'm able to edit the disk's settings to indicate I would like to pass it through to the VM. Then I can add the 2nd vDisk Location, set it to manual, and provide the location "/dev/sde". I boot the VM and can work with the disk without any issues. However, if I reboot the Unraid server (not the VM; the VM can be rebooted without issue) and leave the disk configured in the VM, the Unraid server will boot loop over and over until I remove it. No idea why this is happening. Attached is my diag log if anyone has pointers on what else I could try here.

     VM config, no 2nd drive:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='2'>
       <name>Mint</name>
       <uuid>3b3b4c86-9f52-2a22-b98c-7721fab88048</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>13107200</memory>
       <currentMemory unit='KiB'>13107200</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>12</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='26'/>
         <vcpupin vcpu='2' cpuset='11'/>
         <vcpupin vcpu='3' cpuset='27'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='28'/>
         <vcpupin vcpu='6' cpuset='13'/>
         <vcpupin vcpu='7' cpuset='29'/>
         <vcpupin vcpu='8' cpuset='14'/>
         <vcpupin vcpu='9' cpuset='30'/>
         <vcpupin vcpu='10' cpuset='15'/>
         <vcpupin vcpu='11' cpuset='31'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/3b3b4c86-9f52-2a22-b98c-7721fab88048_VARS-pure-efi.fd</nvram>
         <boot dev='hd'/>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='6' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/nvme1n1' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='sata'/>
           <alias name='sata0-0-2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <alias name='pci.6'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <alias name='pci.7'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:a0:f4:20'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Mint/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0e' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x11' slot='0x00' function='0x4'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0x082c'/>
             <address bus='3' device='3'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x0764'/>
             <product id='0x0501'/>
             <address bus='7' device='6'/>
           </source>
           <alias name='hostdev5'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>

     Attachment: atlantis-diagnostics-20230317-1627.zip
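
     Follow-up for anyone hitting the same boot loop: what solved it for me (see post 1 above) was pointing the second vDisk at the drive's persistent /dev/disk/by-id path instead of /dev/sde. A quick sanity check, assuming a standard Linux shell (the by-id name below is my drive's, substitute your own):

       # Confirm which kernel device a persistent name currently resolves to
       readlink -f /dev/disk/by-id/ata-CT4000MX500SSD1_2251E6961C0F
       # -> /dev/sde (or whatever letter the kernel handed out this boot)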
  4. Helpful as always, @Squid! Just to be clear: it sounds like I must have, at minimum, a single drive allocated to the array? Since VMs / Dockers etc. require the array to be running, the minimum is that single (basically throwaway) drive. Just checking, since I have never attempted to run Unraid with only a single data drive and no parity drive.
  5. A few months ago I migrated my spinning rust array over to an old 2012 build I had, due to hard drive thermal issues during parity checks. I've been going back and forth trying to find a new case that can support both my newer hardware AND roughly 6-8 platter drives; it seems the only way to get both is to use vintage cases. Anyway, I'm starting to miss the VM and Docker functionality (well, it's not missing, it's just silly slow running on an old machine), and that got me thinking: can I keep the spinning rust as a "storage" machine and spin up a new Unraid instance that has NO traditional hard drive array, only an SSD pool and pass-through NVMe's? Thoughts, comments?
  6. Thanks for the added inspiration @tjb_altf4. Yes, I'm pretty sure I had already viewed that video, but as with the case in your build, it's slightly different. I wanted to go with the version that has the 3x 5.25" bays already populated with hot-swap trays, rather than the model you have (the one in the video you posted), which would require me to buy the hot-swap backplanes and swap them in (probably another $400 in added cost too). But... that version has the fan wall I could mount my rad onto. Ugh, I wish they would include the fan wall on the other version as well. Anyway, not that I need that many drives, but the sick drive cage you made almost makes me want to rethink and go with the other version, in case you decide to release the CAD files. It's a pretty close replica of the 45Drives enclosure I was looking at after finding out that's what LTT was using for their Unraid servers... until I got the quote from them at ~$1,500 just for the bare case.
  7. The Rosewill 4U I am looking at only has a support bracket in the center, no fan wall behind the HDD bay. Is this the same as what you have, and if so, how did you mount the rad in the case? I'm sure it will have plenty of space since I'm only running an ATX-sized mobo, so that wouldn't be an issue. The AMD 5700 XT sticks out, I think, another 2 or 3 inches past the mobo, but even then I should have more than enough space to place the rad in the case before it hits the back of the HDD backplane / fans.
  8. I've been running Unraid as my primary desktop / server for over a year now and I really enjoy many aspects of it. However, I recently upgraded and added additional spinning rust to my Lian Li O11 Dynamic enclosure, and the drive temps are just not coming down even after adding all the fans that I could. Yeah, I know this is a horrible case for spinning rust, but when I started the build I never intended to put mechanical drives in it at all, just NVMe's and a few SSD's. I wanted it to be a modern desktop PC. lol... So at this point I have migrated the "array" part of my Unraid build to an old build I had laying around (Corsair 500R, 8+ year old C2D CPU, etc.). That case is actually almost perfect for an Unraid build compared to my O11 Dynamic, and I even had a 5.25" to 3x 3.5" hot-swap backplane that keeps the drives cool. At the end of the day, though, the hardware is junk for running VMs and Dockers, the part of Unraid I enjoy the most. As for the "desktop": since I had that VM's NVMe drive passed through, I was able to just pull the Unraid USB drive, select the NVMe as the new boot drive, and it booted right up without any issues at all. Seriously, gotta love that. As a result, I am now at a crossroads as to what to do, hence this post to see what you all think.

     Option 1: Upgrade the old build to a 5700X, allowing me to run VMs and Dockers on the Unraid server while keeping my "desktop" as a traditional bare-metal machine. I feel kind of bad about this, considering the desktop is WAY overkill in terms of CPU / other I/O (3950X, 32GB RAM, 2x NVMe's, 3x SSD's).

     Option 2: Don't upgrade the old machine; keep it as a spinning rust file server, since the hardware is fine for that, and Unraid my other "desktop" machine instead. In this configuration, however, I would have no interest in the array part at all, but can you even start VMs and Docker without any disks in the array? I know you can add SSD's to a cache pool, but don't you still need disks populating the array? It's almost like I want a high-speed NVMe / SSD-only version of Unraid focused on the virtualization part. Not sure Unraid is even still the right fit for that.

     Option 3: Pick up a Rosewill RSV-L4412U 4U case and migrate everything back into a single machine. The only challenge here is that I have a 360mm water-cooled AIO for the CPU that might be interesting to fit / fabricate a bracket for, but if successful this would give me the spinning rust storage with proper cooling and the ability to run it all on my newer hardware for VMs / Dockers.
  9. Background: I am planning a trip over the next several weeks and would like to take my workstation VM image with me and run it on a different machine during my travels. Considering my workstation is just a vdisk image and, as far as I am aware, Unraid is just running QEMU, I thought this would be easy. So I installed qemu-kvm, libvirt-daemon-system, libvirt-clients, bridge-utils, virtinst and virt-manager, then took a copy of my vdisk1.img and attempted to boot it in virt-manager, with no luck. The VM has a machine type of i440fx-4.2 and virt-manager only offers plain i440fx as an option, so that could be an issue, I guess. I also suspect EDK2 (TianoCore) is holding me back here. I see plenty of posts about trying to run Unraid in a VM, but have yet to find any documentation on how to run a VM created in Unraid somewhere other than in Unraid. I guess I could just buy another key and use that for the 3 weeks I am away from my server, but I'm looking to avoid that since I will have little to no use for it long term.
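
     For anyone attempting the same move, this is roughly the direction I was heading (a sketch only, assuming the copied vdisk1.img is a raw image and virt-install is available on the target box; the name, memory, and vCPU values are placeholders):

       # Import the copied Unraid vdisk as a new UEFI guest; --boot uefi asks
       # libvirt to pick an OVMF firmware, matching the OVMF loader Unraid used
       virt-install \
         --name mint-travel \
         --memory 8192 \
         --vcpus 4 \
         --import \
         --disk path=/var/lib/libvirt/images/vdisk1.img,format=raw,bus=sata \
         --os-variant generic \
         --boot uefi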
  10. I knew I forgot something else. The funny thing is, I must have looked at it when I was trying to fix it, but it was set to auto, so I figured it would not use SSL if it couldn't resolve. Thank you!
  11. So I ended up rebooting and selected the webui mode on boot. I was able to get in and uninstall the unraid.net app; however, my server is still trying to resolve to the url [pointer].unraid.net.
  12. I installed the unraid.net application and registered my server with unraid.net. After that, the only way to get to the server was via the url [pointer].unraid.net (going to the local IP is no longer allowed). I can still SSH, of course, but since my internet decided to die this afternoon, I'm left with a somewhat interesting issue: I need to pass through a USB wifi dongle to a VM, but can't, since the UI is not accessible. lol. Any ideas here?
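
     (For the archives: per the follow-ups above, the culprit was the SSL setting being on auto. A hedged sketch of the SSH-side workaround, assuming the setting still lives in ident.cfg on the flash drive as it did on my version; back the file up first:)

       # Force plain HTTP so the webui answers on the local IP again
       sed -i 's/^USE_SSL="auto"/USE_SSL="no"/' /boot/config/ident.cfg
       # Restart the web server so the change takes effect (Unraid is Slackware-based)
       /etc/rc.d/rc.nginx restart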
  13. I hope so. This Big Navi hardware reset bug makes me ready to switch to team green (assuming supply issues return to normal at some point). lol
  14. I think this post can be tagged as solved. Based on everyone's support here, I have everything running well and stable now. Thank you all for your help, as always!
  15. Wow. Yes, I completely misunderstood adding a new pool vs adding the drive to the cache pool. Had I looked at the release thread, I would have realized that we can now add SSD's to any named pool (not just the cache pool). It makes complete sense why it's raid0; like you said, even though the 1TB SSD is in the cache pool, it's restricted to the smallest member of that pool (the 240GB drive). It is somewhat misleading, though, that the WebUI shows the total size of that cache pool as 1.2TB when it should show the actual usable 240GB. Yes, I know the parity is invalid; the rebuild onto the new, larger parity drive is still ongoing, and I am not going to make any further changes until that process completes. I need access to the VMs that got nuked on the btrfs-errored drive (which I can restore from the array disks), but I refuse to do anything until that rebuild is done. I do not have another SATA port on my mobo, which is why I didn't just add the new parity drive while the old parity drive was still installed.
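
     (One thing that helped me make sense of the reported vs usable sizes, a sketch assuming the pool is mounted at /mnt/cache like mine; adjust the path for your pool name:)

       # Show raw vs estimated-usable space for the btrfs pool, which explains
       # why the WebUI total and the real capacity differ
       btrfs filesystem usage /mnt/cache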