relink

Members
  • Posts: 235
  • Joined
  • Last visited

1 Follower

relink's Achievements: Explorer (4/14)

Reputation: 8
Community Answers: 1

  1. I appreciate the info, and I'll keep it in mind, though hopefully this doesn't happen again. Unfortunately I lost about 8TB worth of data because of this; no fault of Unraid, it was the HP SAS expander I was using. In fact, despite having 5 "failed" drives (which included both parity drives), I'm really happy to see that the majority of my data is still intact thanks to Unraid! Luckily my Intel SAS expander came in today, and so far so good!
  2. There must be some way to clear the failed status and simply add the drive back to the array? I know the drives are fine; it's my SAS expander that's bad. I already have a new one on the way, but I'm trying to keep Unraid up until it gets here. Normally the disks just show as missing, and after a few reboots I'm good to go. But now disks are randomly being marked as "Failed". Whenever they are marked as failed, the only way to get them back into the array seems to be to remove them, start the array in maintenance mode, stop the array, re-add the disk, then rebuild the data on that drive. This would be fine if it weren't happening multiple times a day when I know there's nothing wrong with the disks anyway.
  3. Just like with CUDA in Docker containers or LXC, the host OS needs the proper drivers in order for the Docker containers to function. Unfortunately ROCm just isn't as popular as CUDA, which makes finding info more difficult. I'm hoping not to have to use a VM for one or two apps when there are prebuilt Docker images available with ROCm support (a minimal run example is sketched after this list).
  4. I really wanted to give it a try, but the link is broken.
  5. Ok, so this has been a roller coaster. I rebooted again; thankfully I could do it from the GUI Mode desktop this time. This time I booted into GUI Safe Mode and was able to access the WebUI, but one of my drives was missing from the array. This has happened before, and a reboot usually fixes it. I rebooted again, back into regular GUI Mode. This time everything loaded up just like it should, and everything is working again... except now I'm missing 2 disks from my array. Yet they are mounted and fully accessible... I don't understand that one at all.
  6. My Unraid box has been running flawlessly for almost 2 years now. I did the latest stable update a few days ago, rebooted, and everything was fine. Today I noticed I couldn't access Plex, and when I went to open the Unraid UI it wouldn't load; I tried to SSH in and that couldn't connect either. Unfortunately this left me needing to do a hard reboot, and this time I booted into GUI mode. The desktop loaded, but the WebUI still won't load. So basically I have no way to access the Unraid UI at all. Possibly related: my array took almost 10 minutes to spin up, which is not normal at all. Luckily I have a script that plays a tone when the array comes up, or I wouldn't have known.
  7. @ich777 Ok, I managed to figure out the issue. When I first set up the container I needed to add `lxc.cgroup2.devices.allow = c 195:* rwm` and `lxc.cgroup2.devices.allow = c 243:* rwm` to my config, and I did verify at the time that 195 and 243 were correct. However, I have re-created this container several times and tried different distros in between, and for whatever reason they have changed to 195 and 238... I didn't realize that could change (a quick way to check the numbers is sketched after this list). But regardless, after manually installing the NVIDIA driver, CUDA, and cuDNN, it appears to be finally working!
  8. That post was actually what inspired me to try LXC. But I think I might have found part of my issue. In my excitement to get everything set up, it never dawned on me to create a user inside the container 😅 so I had done everything as root. I ended up nuking that container last night since I also started having an unrelated issue with PostgreSQL. I'm going to start fresh and will post back with how everything worked out.
  9. Is it possible to run CUDA in an LXC container? I'm having an issue and am unsure of where to start troubleshooting. I have my Quadro P400 exposed to my Ubuntu 22.04 container and can see it from nvtop inside the container. The driver in the container is the 535 branch, the exact same version that is installed in Unraid. Inside the container I have installed CUDA 11.2 and cuDNN 8.1.0, and both seem to be installed fine. The issue is that the app I need the GPU for says it has loaded all the libraries but that it can't load the GPU... I don't know if it's a permissions issue or what (the basic checks I'd start with are sketched after this list). For those curious, I'm trying to set up the Nextcloud app Recognize.
  10. I didn't want to change that. I know I can't use a bind mount or anything like that; I already went down that rabbit hole. All I want is to store the named volume somewhere other than inside my docker.img file. EDIT: I was completely unaware that Unraid 6.9 introduced the ability to get rid of the Docker img file and use a directory instead. That completely solves this issue right there. I'll just screenshot which containers I'm currently running and migrate over to using a directory instead of a vDisk. As long as everything goes well, problem solved.
  11. The title pretty much says it all. I am trying to install the Nextcloud-AIO container from CA, which insists on using a named volume to store pretty much everything except your personal files. From my understanding, Unraid by default will store named volumes inside the docker.img, which I absolutely do not want to do. The Nextcloud AIO GitHub has a "manual install" process that supposedly allows you to use a bind mount instead; however, the manual install breaks nearly every feature that makes the AIO setup so nice in the first place, so that seems kind of pointless. I have already searched the forums and everyone keeps sharing this link to an install guide: https://myunraid-ru.translate.goog/nextcloud-aio/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=de&_x_tr_pto=wapp However, nowhere in that guide is this issue addressed (a quick way to see where a named volume actually lands is sketched after this list).
  12. That definitely seemed to have been the culprit. I have been running on and off for over 2 days and that issue hasn't happened again since.
  13. Ok, I'm off to try again. Thank you for your help. *fingers crossed*
  14. `<memoryBacking> <source type='memfd'/> <access mode='shared'/> </memoryBacking>` In the context of the earlier comment I assumed this went along with the Unraid share setting. Should I ensure this is changed to `<nosharepages/>` regardless of what the share setting is? (Both variants are written out after this list for reference.)
  15. Ok, so this is definitely a KVM/host issue. I decided to scrap Ubuntu entirely and installed Arch. I did nothing in Arch aside from setting up user accounts, networking, the time zone, and other basic stuff. I didn't even mount the share; it was in the VM config but not mounted in the guest. Arch was literally just sitting there doing absolutely nothing, and this issue still happened.
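
For the ROCm point in post 3: a minimal sketch of how a ROCm-enabled container is typically started, passing through the AMD compute and render device nodes. It assumes the amdgpu kernel driver is loaded on the Unraid host so that /dev/kfd and /dev/dri exist; the image name and the rocminfo check are just examples, not the specific app in question.

```sh
# Minimal sketch, assuming the amdgpu driver is loaded on the host so that
# /dev/kfd and /dev/dri exist. Image name and rocminfo check are examples only.
docker run --rm -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  rocm/rocm-terminal rocminfo
```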
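For post 7: a quick way to confirm the NVIDIA device major numbers on the Unraid host before writing the lxc.cgroup2.devices.allow lines. The nvidia-uvm major is assigned dynamically, which is why it can move (for example from 243 to 238) across reboots or driver reinstalls.

```sh
# Sketch: check the majors on the host; the number before the comma in the
# ls output is the major used in the lxc.cgroup2.devices.allow lines.
ls -l /dev/nvidia*
# Or look them up directly (exact names and values vary by driver and reboot):
grep nvidia /proc/devices
```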
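For post 9: a sketch of the generic checks I'd run inside the container before digging deeper, assuming a standard Ubuntu 22.04 userland; nothing here is specific to Recognize.

```sh
# Generic checks inside the LXC container (sketch only):
nvidia-smi                                   # driver and GPU visible at all?
ls -l /dev/nvidia*                           # device nodes present, and readable by
                                             # the user running the app, not just root?
ldconfig -p | grep -E 'libcudart|libcudnn'   # CUDA/cuDNN libraries on the loader path?
```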
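For post 11: by default Docker keeps named volumes under its data-root (/var/lib/docker/volumes), which on a stock Unraid setup lives inside the docker.img loopback unless the Docker service is switched to a directory. This is a sketch of how to see exactly where a given volume lands; the volume name is only an example, not necessarily what the AIO container creates.

```sh
# Sketch: print the host path backing a named volume (example volume name).
docker volume inspect nextcloud_aio_nextcloud --format '{{ .Mountpoint }}'
# typically -> /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data
```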
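For post 14: the two <memoryBacking> variants being compared, written out as plain libvirt domain XML for reference. Which one Unraid's share setting expects is exactly the open question in that post; in stock libvirt, the memfd/shared form is what a virtiofs-backed share needs, while <nosharepages/> only asks the host not to merge the guest's pages (KSM) and does not make the memory shared.

```xml
<!-- Shared memory backing, as quoted in post 14 -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- The alternative mentioned there -->
<memoryBacking>
  <nosharepages/>
</memoryBacking>
```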