relink

Everything posted by relink

  1. I appreciate the info, and I'll keep it in mind, though hopefully this doesn't happen again. Unfortunately I lost about 8TB worth of data because of this; no fault of Unraid, it was the HP SAS expander I was using. In fact, despite having 5 "failed" drives (which included both parity drives), I'm really happy to see that the majority of my data is still intact thanks to Unraid! Luckily my Intel SAS expander came in today, and so far so good!
  2. There must be some way to clear the failed status and simply add the drive back to the array? I know the drives are fine; it's my SAS expander that's bad. I already have a new one on the way, but I am trying to keep Unraid up until it gets here. Normally the disks just show as missing, and I reboot a few times and I'm good to go. But now disks are randomly being marked as "Failed". Whenever they are marked as failed, the only way to get them back into the array seems to be to remove them, start the array in maintenance mode, stop the array, re-add the disk, then rebuild the data on that drive. This would be fine if it didn't happen multiple times a day when I know there's nothing wrong with the disks anyway.
  3. Just like with CUDA in Docker containers or LXC, the host OS needs the proper drivers in order for the Docker containers to function. Unfortunately, ROCm just isn't as popular as CUDA, which makes finding info more difficult. I'm hoping not to have to use a VM for one or two apps when there are prebuilt Docker images available with ROCm support.
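      For anyone trying the same thing, the usual pattern (a rough sketch only; the image name is just an example of a prebuilt ROCm image) is to pass the AMD GPU device nodes straight into the container:
      ```
      # Sketch: run a prebuilt ROCm image with the host GPU exposed.
      # Assumes the amdgpu driver (and /dev/kfd) already exists on the host.
      docker run -it --rm \
        --device=/dev/kfd \
        --device=/dev/dri \
        --group-add video \
        rocm/pytorch:latest \
        rocminfo
      ```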
  4. I really wanted to give it a try, but the link is broken.
  5. Ok, so this has been a roller coaster. I rebooted again; thankfully I could do it from the GUI Mode desktop this time. This time I booted into GUI Safe Mode and I was able to access the WebUI, but one of my drives was missing from the array. This has happened before, but a reboot usually fixes it. I rebooted again, but back into regular GUI Mode. This time everything loaded up just like it should, and everything is working again...except now I'm missing 2 disks from my array. But they are mounted and fully accessible...I don't understand that one at all.
  6. My Unraid box has been running flawlessly for almost 2 years now. I did the latest stable update a few days ago, rebooted, and everything was fine. Today I noticed I couldn't access Plex, and when I went to open the Unraid UI it wouldn't load; I tried to SSH in and that couldn't connect either. Unfortunately this left me needing to do a hard reboot, and this time I booted into GUI mode. The desktop loaded, but the WebUI still won't load. So basically I have no way to access the Unraid UI at all. Possibly related: my array took almost 10 minutes to spin up, which is not normal at all. Luckily I have a script that plays a tone when the array comes up, or I wouldn't have known.
  7. @ich777 ok, I managed to figure out the issue. When I first set up the container I needed to add this to my config:
      ```
      lxc.cgroup2.devices.allow = c 195:* rwm
      lxc.cgroup2.devices.allow = c 243:* rwm
      ```
      and I did verify at the time that 195 and 243 were correct. However, I have re-created this container several times and tried different distros in between, and for whatever reason it has changed to 195 and 238...I didn't realize that could change. But regardless, after manually installing the NVIDIA driver, CUDA, and cuDNN, it appears to be finally working!
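      For anyone hitting the same thing: the major numbers come straight from the device nodes on the host, so they're worth re-checking whenever the container is recreated. Rough sketch (the output below is just an example; nvidia-uvm in particular seems to get its major number assigned dynamically, which would explain why it changed):
      ```
      # On the Unraid host - the number before the comma is the major number
      ls -l /dev/nvidia*
      # crw-rw-rw- 1 root root 195,   0 ... /dev/nvidia0
      # crw-rw-rw- 1 root root 195, 255 ... /dev/nvidiactl
      # crw-rw-rw- 1 root root 238,   0 ... /dev/nvidia-uvm
      # crw-rw-rw- 1 root root 238,   1 ... /dev/nvidia-uvm-tools
      ```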
  8. That post was actually what inspired me to try LXC. But I think I might have found part of my issue. In my excitement to get everything set up, it never dawned on me to create a user inside the container 😅 so I had done everything as root. I ended up nuking that container last night since I also started having an unrelated issue with PostgreSQL. I'm going to start fresh and will post back with how everything works out.
  9. Is it possible to run CUDA in an LXC container? I'm having an issue and I'm unsure of where to start troubleshooting. I have my Quadro P400 exposed to my Ubuntu 22.04 container and can see it from nvtop inside the container. The driver in the container is from the 535 branch, the exact same version that is installed in Unraid. Inside the container I have installed CUDA 11.2 and cuDNN 8.1.0, and both seem to be installed fine. The issue is that the app I need the GPU for says it has loaded all the libraries but that it can't load the GPU… I don't know if it's a permissions issue or what. For those curious, I'm trying to set up the Nextcloud app Recognize.
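      A few quick sanity checks inside the container can narrow down whether it's a driver, library, or permissions problem (sketch only; exact versions and paths will differ):
      ```
      # Inside the LXC container:
      nvidia-smi                                   # does the driver see the Quadro P400?
      nvcc --version                               # is the CUDA toolkit actually on PATH?
      ldconfig -p | grep -E 'libcudart|libcudnn'   # can the app find the runtime libraries?
      ls -l /dev/nvidia*                           # are the device nodes present and readable?
      ```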
  10. I didn't want to change that. I know I can't use a bind mount or anything like that; I already went down that rabbit hole. All I want is to store the named volume somewhere other than inside my docker.img file. EDIT: I was completely unaware that Unraid 6.9 introduced the ability to get rid of the docker.img file and use a directory instead. That completely solves this issue right there. I'll just screenshot what containers I'm currently running and migrate over to using a directory instead of a vDisk. As long as everything goes well, problem solved.
  11. The title pretty much says it all. I am trying to install the Nextcloud-AIO container from CA, which insists on using a named volume to store pretty much everything except your personal files. From my understanding, Unraid by default will store named volumes inside docker.img, which I absolutely do not want to do. The Nextcloud AIO GitHub has a "manual install" process that supposedly allows you to use a bind mount instead; however, the manual install breaks nearly every feature that makes the AIO setup so nice in the first place, so that seems kind of pointless. I have already searched the forums and everyone keeps sharing this link to an install guide: https://myunraid-ru.translate.goog/nextcloud-aio/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=de&_x_tr_pto=wapp However, nowhere in this guide is this issue addressed.
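      For reference, Docker itself can pre-create a named volume that is backed by a bind to an arbitrary path, which would keep it out of docker.img; I haven't confirmed that AIO is happy with this, and the volume name and path below are just placeholders:
      ```
      # Sketch: pre-create the named volume as a bind to a share path,
      # so the container picks it up instead of creating one inside docker.img
      docker volume create \
        --driver local \
        --opt type=none \
        --opt o=bind \
        --opt device=/mnt/user/appdata/nextcloud-aio \
        nextcloud_aio_mastercontainer
      ```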
  12. That definitely seemed to have been the culprit. I have been running on and off for over 2 days and that issue hasn't happened again since.
  13. Ok, I'm off to try again. Thank you for your help. *fingers crossed*
  14. My VM XML currently has:
      ```xml
      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>
      ```
      In the context of the earlier comment, I assumed this went along with the Unraid share setting. Should I ensure this is changed to `<nosharepages/>` regardless of what the share setting is?
  15. Ok, so this is definitely a KVM/host issue. I decided to scrap Ubuntu entirely and installed Arch. I did nothing in Arch aside from setting up user accounts, networking, time zone, and other basic stuff. I didn't even mount the share; it was in the VM config, but not mounted in the guest. Arch was literally just sitting there doing absolutely nothing and this issue still happened.
  16. Damn, no go. It's still crashing; this time it was only running for about an hour. I completely reinstalled Ubuntu again, and this time I changed the mount from virtiofs to 9p. I also took the opportunity to expand my vdisk and switched to raw instead of qcow2. I have no idea what's going on.
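      For anyone wondering, a qcow2-to-raw switch like this is typically just a qemu-img conversion; the filenames and size below are examples, and the VM needs to be shut down first:
      ```
      # Sketch: grow the qcow2 vdisk, then convert it to raw (run with the VM stopped)
      qemu-img resize vdisk1.img +20G
      qemu-img convert -p -f qcow2 -O raw vdisk1.img vdisk1.raw.img
      ```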
  17. Ahhh, ok. I did it through the GUI so I never saw that part. I'll give it a shot as soon as I can get to my computer and report back. On a side note, isn't 9p mode significantly slower?
  18. I can certainly try it; can't hurt. I'm not sure what you're talking about here, though. All I did was add the share in the GUI when creating the VM and then added it to /etc/fstab. Maybe I missed an important step or option?
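      For reference, the /etc/fstab entry for one of these shares is basically just the mount tag from the VM config plus a mount point; the tag and paths below are placeholders:
      ```
      # virtiofs form
      shares  /mnt/shares  virtiofs  defaults  0  0
      # equivalent 9p form, if the share is switched to 9p mode
      shares  /mnt/shares  9p  trans=virtio,version=9p2000.L  0  0
      ```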
  19. Anyone? Even any suggestions on what I could check? I'm still dealing with this, and it actually seems to happen in less than 24 hours, maybe 12 or less.
  20. I have no idea what's causing this, so I'll try to provide as much information as possible. I have a VM running Ubuntu Server 22.04. It seems to run just fine for hours on end, but every single day I go to sleep or to work and come back to find the cores assigned to the VM at 100% and the VM totally unresponsive.
      Host:
      • Unraid version 6.11.5
      • AMD Ryzen 5 2600
      • 80GB DDR4
      • Diagnostics attached (taken while the VM was locked up)
      VM info:
      • OS: Ubuntu Server 22.04
      • CPU: 3C/6T, host passthrough
      • RAM: 8GB
      • GPU: VNC + Nvidia Quadro P400 (passed through with its audio controller)
      • Storage: 40GB virtio vdisk, qcow (on NVMe cache); virtiofs-mounted directory on a 2TB unassigned SSD
      VM use: the VM only runs Nextcloud 25.0.2 & NGINX. PostgreSQL and Redis are both running as Docker containers on Unraid. The virtiofs storage is set as the Nextcloud data directory. Aside from SSH & the Nvidia drivers, there is nothing else running on this VM that isn't part of the standard Ubuntu Server installation.
      I have completely formatted and re-installed the guest OS 4 times and this issue still happens. I'm really not sure why...
      VM XML:
      serverus-diagnostics-20221227-1719.zip
  21. I didn't know that, but now that I think about it, Debian is one of the 100% FOSS distros, so that makes sense. Thanks for pointing that out; I actually hadn't seen the latest reply yet. So unless I'm missing something, it looks like my only option is a VM if I want HW transcoding, especially NVENC. I can't think of a single Docker image I haven't tried in the last 2 weeks.
  22. I'm surprised I didn't see this mentioned yet, but is it possible to expose a GPU to this container? I'm desperately trying to find a way to expose my Quadro P400 to Nextcloud so I can enable hardware transcoding in Memories.
  23. Have you had any luck with this? I have been trying to use rclone mount with vfs_cache to do something similar and having no luck at all.
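      For context, what I've been testing is roughly this (the remote name, mount point, and cache sizes are just placeholders):
      ```
      # Sketch: rclone mount with the VFS cache enabled
      rclone mount remote:media /mnt/user/cloud \
        --vfs-cache-mode full \
        --vfs-cache-max-size 50G \
        --vfs-cache-max-age 24h \
        --dir-cache-time 1h \
        --allow-other \
        --daemon
      ```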
  24. Did you ever get Untangle working in KVM? I was literally just talking about doing this exact same thing this weekend. Sorry, I know this is an old post, but what are the odds of running across a post about exactly what I was getting ready to do?