
Everything posted by cooljules1

  1. Hello, I'm trying to deploy a Roon server on an Ubuntu VM, which works just fine. But when I add an unRAID share, the network goes down. I know there are a couple of threads on this on this forum, a few years old, but I can't seem to solve it with the solutions proposed there.

Here's my network before adding unRAID shares: [screenshot]
After adding unRAID shares: [screenshot]

I'm running Ubuntu 24.04. Any help is greatly appreciated.

The VM XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='9'>
  <name>Roon Core Ubuntu</name>
  <uuid>59f593dc-3d20-5417-7081-62901ff6643c</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>30932992</memory>
  <currentMemory unit='KiB'>15728640</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-7.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/59f593dc-3d20-5417-7081-62901ff6643c_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Roon Core Ubuntu/vdisk1.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <serial>vdisk1</serial>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/ubuntu-24.04-live-server-amd64.iso' index='1'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/media/music/'/>
      <target dir='music'/>
      <alias name='fs0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:ea:28:2a'/>
      <source bridge='br0'/>
      <target dev='vnet8'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/6'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/6'>
      <source path='/dev/pts/6'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-9-Roon Core Ubuntu/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='qemu-vdagent'>
      <source>
        <clipboard copypaste='yes'/>
        <mouse mode='client'/>
      </source>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5906' autoport='yes' websocket='5706' listen='0.0.0.0' keymap='da'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

tower-diagnostics-20240503-1230.zip
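For context, the VM XML above passes the music share into the guest as a virtio-9p filesystem, with the mount tag taken from <target dir='music'/>. Below is a minimal sketch of mounting that share inside the Ubuntu guest in a way that cannot hang the boot or the network if the share is unavailable; the mount point /mnt/music is my own example, and the options are standard 9p/systemd ones rather than anything confirmed in this thread. If the share was added as an SMB mount instead, the same nofail/_netdev idea applies to a cifs fstab entry.

    # one-off manual mount to test the 9p passthrough (tag 'music' from the VM XML)
    sudo mkdir -p /mnt/music
    sudo mount -t 9p -o trans=virtio,version=9p2000.L music /mnt/music

    # hypothetical /etc/fstab line; nofail and a short timeout stop systemd from
    # blocking boot/network targets when the share is missing
    music  /mnt/music  9p  trans=virtio,version=9p2000.L,nofail,x-systemd.mount-timeout=15s,_netdev  0  0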
  2. I've had issues with my Docker image becoming corrupt, without any action on my side. It would always work after I reinstalled it, and then end up not working again. I recently upgraded from 6.12.6 to 6.12.10 and now it seems to work, which I find quite strange. Anyway, I wanted to share. I'm just happy it's finally stable for me.
  3. Thank you! This was indeed the solution!
  4. Yes, sorry for not including that. My board is an ASUS Pro WS W680-ACE. I could only boot if I renamed the EFI folder on the USB drive initially, and it boots in legacy mode. On the other machine the EFI folder has not been renamed and it boots in UEFI mode. I'm a bit puzzled, as I can't see why there's a difference between a manual boot override and the normally configured boot priority? Thank you.
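As a point of reference for the EFI-folder behaviour: on an unRAID flash drive the UEFI boot files normally sit in a folder named EFI- when UEFI boot is disabled, and renaming it to EFI is what enables UEFI booting (the UEFI option on the Flash settings page performs the same rename). A minimal check/toggle from the unRAID console, assuming the flash is mounted at /boot as usual:

    ls -d /boot/EFI*         # /boot/EFI = UEFI boot enabled, /boot/EFI- = legacy/CSM only
    mv /boot/EFI- /boot/EFI  # enable UEFI boot; rename back to EFI- to return to legacy boot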
  5. Hi, I've now set up two unRAID servers, both of which exhibit the same behaviour. The USB stick is the default boot priority, but the system just boots to a blank/black screen. I have to enter the BIOS and manually boot from the USB drive every time, which is rather annoying to say the least. Does anyone have a tip as to what's going on here? Thanks, JH
  6. Hello, my Main page is suddenly blank and general performance is buggy. I've tried downloading diagnostics, but the page just hangs on "Downloading....". Any tips as to what's wrong?
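If the web UI hangs on "Downloading....", the same archive can be generated from a terminal or SSH session with unRAID's built-in diagnostics command, which writes the zip to the logs folder on the flash drive:

    diagnostics          # creates tower-diagnostics-<date>.zip under /boot/logs
    ls -lh /boot/logs/   # confirm the archive is there, then copy it off the flash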
  7. Disks mounted, array started! Thank you a mil! You saved my day.
  8. As a reference, this is how it looked prior to booting:

      pool: main
     state: ONLINE
    config:

            NAME              STATE     READ WRITE CKSUM
            main              ONLINE       0     0     0
              raidz1-0        ONLINE       0     0     0
                /dev/sdf1     ONLINE       0     0     0
                /dev/sdg1     ONLINE       0     0     0
                /dev/sdh1     ONLINE       0     0     0
                /dev/sdi1     ONLINE       0     0     0
            logs
              /dev/nvme1n1p1  ONLINE       0     0     0
            cache
              /dev/nvme0n1p1  ONLINE       0     0     0

    errors: No known data errors
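Since the /dev/sdX names in that output are not stable across reboots, it can help to record which physical drives they map to before re-assigning devices in the GUI. A small sketch using standard Linux tooling, nothing unRAID-specific:

    lsblk -o NAME,MODEL,SERIAL,SIZE /dev/sdf /dev/sdg /dev/sdh /dev/sdi   # map sdX names to drive models/serials
    ls -l /dev/disk/by-id/ | grep -E 'sd[f-i]1?$'                         # persistent by-id names for the same disks/partitions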
  9. Thanks. It's not online.

    root@Tower:~# zpool export main
    cannot open 'main': no such pool

    We agree that the vdevs should remain as unassigned devices, and not be added to a pool?
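For what it's worth, a quick way to tell whether a pool is currently imported or merely visible for import: zpool status only shows imported pools, while zpool import with no arguments scans attached devices for pools that could be imported. Nothing here is specific to this setup:

    zpool status main   # fails if 'main' is not currently imported
    zpool import        # lists exported/not-imported pools found on attached devices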
  10. Downloaded two diags, one before starting the array and one after. Not sure if it makes a difference. Thank you!

    post_start_tower-diagnostics-20240208-1539.zip
    pre_start_tower-diagnostics-20240208-1538.zip
  11. Thank you a million! This helps a lot, but I'm still not able to mount the pool. Just to reiterate, the procedure is:

    - unassign all pool devices
    - start the array (check the "Yes I want to do this" box)
    - stop the array
    - re-assign all pool devices, including the new vdev(s); assign all devices sequentially in the same order as zpool status shows, and don't leave empty slots in the middle of the assigned devices
    - start the array; the existing pool will be imported with the new vdev(s)

    I can now start the array just fine, with and without the disks assigned, as described above. But the disks are "Unmountable". I'm sure my mistake is in re-assigning all the pool devices, but I'm struggling to understand how I assign the new vdevs (cache and logs) to the pool? I mean, I can't add them via the CLI when the array is stopped? And I can't when it's started either, because the pool is unmountable? Thanks again!
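For reference only, since the supported route on unRAID is the GUI re-assignment described above: on plain ZFS, log and cache vdevs are attached to an already-imported pool with zpool add. A sketch using the device names from the earlier zpool status output; not something to run blindly against an unRAID-managed pool:

    zpool add main log /dev/nvme1n1p1     # attach a separate intent log (SLOG) vdev
    zpool add main cache /dev/nvme0n1p1   # attach an L2ARC cache vdev
    zpool status main                     # the new vdevs should appear under 'logs' and 'cache'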
  12. I'm fairly new to unRAID, but very happy to have made the switch. I've made a few mistakes already and have learned greatly from them; I hope this isn't another one of those. I've chosen to use RAIDz1 for my main storage pool (main), with a read and write M.2 SSD cache as well (Dev1 - nvme0n1 + Dev3 - nvme1n1). I think this is my first reboot. Everything was working gloriously, but now I can't bring my array online. I can't mount the two NVMe drives either - they used to have the status 0dev, I think. So I'm thinking the issue here is that the RAIDz1 pool expects the two cache drives to be accessible, which for some reason they aren't. I'd really appreciate some input on this. Thank you!

    tower-diagnostics-20240207-2050.zip
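On the question of whether a RAIDz1 pool needs its cache and log devices to come back: an L2ARC cache device is never required for import, and a pool whose separate log device is missing can still be imported with the -m flag, so missing NVMe devices alone shouldn't make the data unreachable. A hedged sketch of how that looks on plain ZFS, not unRAID-specific and not something to run before understanding why the devices disappeared:

    zpool import           # check whether 'main' is visible and which vdevs are reported missing
    zpool import -m main   # import despite a missing log device (the cache device is optional anyway)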