relink

Members
  • Posts: 235
  • Joined
  • Last visited

Posts posted by relink

  1. I appreciate the info; I'll keep it in mind, though hopefully this doesn't happen again. Unfortunately I lost about 8TB worth of data because of this; no fault of Unraid, it was the HP SAS expander I was using. In fact, despite having 5 "failed" drives (which included both parity drives), I'm really happy to see that the majority of my data is still intact thanks to Unraid!

    Luckily my Intel SAS expander came in today, and so far so good!

  2. There must be some way to clear the failed status and simply add the drive back to the array?

     

    I know the drives are fine; it's my SAS expander that's bad. I already have a new one on the way, but I am trying to keep Unraid up until it gets here. Normally the disks just show as missing, and after a few reboots I'm good to go. But now disks are randomly being marked as "failed".

    Whenever they are marked as failed, the only way to get them back into the array seems to be to remove them, start the array in maintenance mode, stop the array, re-add the disk, then rebuild the data on that drive. This would be fine if it weren't happening multiple times a day when I know there's nothing wrong with the disks anyway.

  3. Just like with CUDA in Docker containers or LXC, the host OS needs the proper drivers in order for the containers to function.
     

    Unfortunately, ROCm just isn't as popular as CUDA, so finding info is more difficult.
     

    I'm hoping not to have to use a VM for one or two apps when there are prebuilt Docker images available with ROCm support.
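
    For anyone who lands here later, the ROCm images on Docker Hub generally just need the host's GPU device nodes passed in; roughly something like this (the image and the test command are just examples, not something I've verified on Unraid):

    # /dev/kfd is the ROCm compute interface and /dev/dri holds the render nodes;
    # both come from the host's amdgpu driver, which is why the host needs it loaded.
    docker run -it --rm \
      --device=/dev/kfd \
      --device=/dev/dri \
      --group-add video \
      rocm/pytorch:latest \
      python3 -c 'import torch; print(torch.cuda.is_available())'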

  4. OK, so this has been a roller coaster.

    I rebooted again; thankfully I could do it from the GUI Mode desktop this time. This time I booted into GUI Safe Mode and was able to access the WebUI, but one of my drives was missing from the array. This has happened before, and a reboot usually fixes it.

     

    I rebooted again, but back into regular GUI Mode. This time everything loaded up just like it should, and everything is working again...except now I'm missing 2 disks from my array. Yet they are mounted and fully accessible...I don't understand that one at all.

  5. My Unraid box has been running flawlessly for almost 2 years now. I did the latest Stable update a few days ago, rebooted, and everything was fine. Today I noticed I couldn't access Plex, and when I went to open the Unraid UI it wouldn't load; I tried to SSH in and that couldn't connect either. Unfortunately this left me needing to do a hard reboot, this time into GUI mode. The desktop loaded, but the WebUI still won't load. So basically I have no way to access the Unraid UI at all.

    Possibly related: my array took almost 10 minutes to spin up, which is not normal at all. Luckily I have a script that plays a tone when the array comes up, or I wouldn't have known.
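
    (For what it's worth, the script is nothing fancy; roughly along these lines, with the mount check and the bell just being whatever notification you prefer:)

    #!/bin/bash
    # Wait until the user shares are mounted, then ring the terminal bell.
    while ! mountpoint -q /mnt/user; do
        sleep 10
    done
    printf '\a'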

  6. @ich777 OK, I managed to figure out the issue. When I first set up the container I needed to add this to my config:

    lxc.cgroup2.devices.allow = c 195:* rwm
    lxc.cgroup2.devices.allow = c 243:* rwm

     

    and I did verify at the time that 195 and 243 were correct. However, I have re-created this container several times and tried different distros in between, and for whatever reason it has changed to 195 and 238...I didn't realize that could change.
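
    If anyone else runs into this, the major numbers can be read straight off the host before writing the config, e.g.:

    # The major number is the first of the two numbers listed for each node
    # (195 for /dev/nvidia*, and whatever /dev/nvidia-uvm was assigned this boot).
    ls -l /dev/nvidia*
    grep nvidia /proc/devices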

    But regardless, after manually installing the NVIDIA driver, CUDA, and cuDNN, it appears to be finally working!

     

    Screenshot 2023-06-28 at 12.38.13 AM.png

     

    Screenshot 2023-06-28 at 12.40.31 AM.png

    Screenshot 2023-06-28 at 12.42.52 AM.png

    Screenshot 2023-06-28 at 12.42.20 AM.png

  7. 5 hours ago, ich777 said:

    Please look at the first recommended post on top from @juan11perez.

     

    It should be possible indeed.

     

    Please also share your Diagnostics so that I can see how everything is configured.


    That post was actually what inspired me to try LXC. 
     

    But I think I might have found part of my issue. In my excitement to get everything set up, it never dawned on me to create a user inside the container 😅 so I had done everything as root.
     

    I ended up nuking that container last night, since I also started having an unrelated issue with PostgreSQL.
     

    I'm going to start fresh. I will post back with how everything works out.

  8. Is it possible to run CUDA in an LXC container? I'm having an issue and unsure of where to start troubleshooting.
     

    I have my Quadro P400 exposed to my Ubuntu 22.04 container and can see it from nvtop inside the container. 
     

    The driver in the container is the 535 branch, the exact same version that is installed in Unraid.

     

    Inside the container I have installed CUDA 11.2 and cuDNN 8.1.0. Both seem to be installed fine. 
     

    The issue is that the app I need the GPU for says it has loaded all the libraries, but that it can't load the GPU…

     

    I don't know if it's a permissions issue or what.
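
    I assume the first things to check are along these lines from inside the container (the device list is from my setup, and I'm not sure every CUDA install ships the demo suite):

    # These nodes have to exist in the container and be covered by the
    # lxc.cgroup2.devices.allow entries on the host.
    ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm /dev/nvidia-uvm-tools
    nvidia-smi
    # Quick CUDA sanity check, if the toolkit installed the demo suite:
    /usr/local/cuda/extras/demo_suite/deviceQuery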

     

    For those curious, I'm trying to set up the Nextcloud app Recognize.

  9. 7 hours ago, szaimen said:

    you cannot change it to not use volumes for data to be stored

    I didn't want to change that; I know I can't use a bind mount or anything like that, I already went down that rabbit hole. All I want is to store the named volume somewhere other than inside my docker.img file.

     

    EDIT:
    So I was completely unaware that Unraid 6.9 introduced the ability to get rid of the docker.img file and use a directory instead. That completely solves this issue. I'll just screenshot which containers I'm currently running and migrate over to using a directory instead of a vDisk. As long as everything goes well, problem solved.
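
    (For anyone following along, you can confirm where a named volume actually lives with docker volume inspect; the volume name here is just an example:)

    # Prints the host-side path backing the named volume. Once Docker is set to
    # use a directory instead of docker.img, that path sits inside the directory.
    docker volume inspect nextcloud_aio_nextcloud --format '{{ .Mountpoint }}'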

  10. The title pretty much says it all. I am trying to install the Nextcloud-AIO container from CA, which insists on using a named volume to store pretty much everything except your personal files. From my understanding, Unraid by default stores named volumes inside docker.img, which I absolutely do not want.

    The Nextcloud AIO GitHub has a "manual install" process that supposedly allows you to use a bind mount instead; however, the manual install breaks nearly every feature that makes the AIO setup so nice in the first place, so that seems kind of pointless.

    I have already searched the forums, and everyone keeps sharing this link to an install guide:
    https://myunraid-ru.translate.goog/nextcloud-aio/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=de&_x_tr_pto=wapp
    However, nowhere in this guide is this issue addressed. 

  11. OK, so this is definitely a KVM/host issue. I decided to scrap Ubuntu entirely and installed Arch. I did nothing in Arch aside from setting up user accounts, networking, time zone, and other basic stuff. I didn't even mount the share; it was in the VM config, but not mounted in the guest. Arch was literally just sitting there doing absolutely nothing, and this issue still happened.

  12. I have no idea what's causing this, so I'll try to provide as much information as possible.

     

    I have a VM running Ubuntu Server 22.04. It seems to run just fine for hours on end, but not one day has gone by where I haven't come back from sleep or work to find the cores assigned to the VM pegged at 100% and the VM totally unresponsive.

     

    Unraid Version 6.11.5

    AMD Ryzen 5 2600

    80GB DDR4

    Diagnostics Attached (Taken while the VM was locked up)

     

    VM Info:

    OS: Ubuntu Server 22.04

    CPU: 3C/6T Host Passthrough

    RAM: 8GB

    GPU:

    1. VNC
    2. Nvidia Quadro P400 (passed through with its audio controller)

    Storage:

    1. 40GB VirtIO vDisk qcow2 (on NVMe cache)
    2. Virtiofs mounted directory on a 2TB unassigned SSD.

     

    VM Use:

    The VM only runs Nextcloud 25.0.2 and NGINX. PostgreSQL and Redis are both running as Docker containers on Unraid. The virtiofs storage is set as the Nextcloud data directory.
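
    (For completeness, the virtiofs share is mounted in the guest by its tag, something like the line below; the mount point is just what I happened to pick:)

    # 'ncdata' is the <target dir> tag from the VM XML further down.
    mount -t virtiofs ncdata /mnt/ncdata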

     

    Aside from SSH and the NVIDIA drivers, there is nothing running on this VM that isn't part of the standard Ubuntu Server installation.

     

    I have completely formatted and re-installed the guest OS 4 times and this issue still happens. I'm really not sure why...

     

     

    VM XML:


    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='2'>
      <name>NC-Ubuntu</name>
      <uuid>2cce725e-4a5b-3f14-a49d-df77d39f892f</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>
      <vcpu placement='static'>6</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='6'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='8'/>
        <vcpupin vcpu='4' cpuset='4'/>
        <vcpupin vcpu='5' cpuset='10'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/2cce725e-4a5b-3f14-a49d-df77d39f892f_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='3' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/cache-nvme/domains/NC-Ubuntu/vdisk1.img' index='2'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='2'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/cache-nvme/isos/ubuntu-22.04.1-live-server-amd64.iso' index='1'/>
          <backingStore/>
          <target dev='hda' bus='sata' tray='open'/>
          <readonly/>
          <boot order='1'/>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0x15'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0x16'/>
          <alias name='pci.7'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <filesystem type='mount' accessmode='passthrough'>
          <driver type='virtiofs' queue='1024'/>
          <binary path='/usr/libexec/virtiofsd' xattr='on'>
            <cache mode='always'/>
            <sandbox mode='chroot'/>
            <lock posix='on' flock='on'/>
          </binary>
          <source dir='/mnt/disks/ncdata-cache'/>
          <target dir='ncdata'/>
          <alias name='fs0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </filesystem>
        <interface type='bridge'>
          <mac address='52:54:00:d6:34:6b'/>
          <source bridge='br0'/>
          <target dev='vnet1'/>
          <model type='virtio-net'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-NC-Ubuntu/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <alias name='input0'/>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'>
          <alias name='input1'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input2'/>
        </input>
        <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>


    serverus-diagnostics-20221227-1719.zip

  13. 2 hours ago, Kilrah said:

    ffmpeg on debian not being built with nvenc support either.

    I didn't know that, but now that I think about it, Debian is one of the 100% FOSS distros, so that makes sense.
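
    (If anyone wants to check their own image, a quick way to see whether a given ffmpeg build was compiled with NVENC is something like:)

    # Lists the NVENC encoders the build knows about; no output means no NVENC,
    # regardless of what drivers are present on the host.
    ffmpeg -hide_banner -encoders | grep -i nvenc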

     

    2 hours ago, Kilrah said:

    Looks like what you were answered in  https://github.com/nextcloud/all-in-one/discussions/1525

    Thanks for pointing that out, I actually hadn't seen the latest reply yet.

     

    So unless I'm missing something, it looks like my only option is a VM if I want HW transcoding, especially NVENC. I can't think of a single Docker image I haven't tried in the last 2 weeks.