
Trunkton

Everything posted by Trunkton

  1. It's reading stuff like this that adds fuel to my itch to burn money on a Quadro to play with the new NVIDIA Unraid support, even though there are just 3 users max running Plex. So Plex, now Jellyfin, possibly a Minecraft Docker, and who knows what else.
  2. Another vote for the new CA! It's quite a luxury to have. Thank you. Running 6.7-rc3.
  3. Thanks for this excellent template! Under Settings > Dashboard > Server > Transcoding, the ffmpeg path is blank. Should I be concerned about that, or is it not a big deal?
  4. Thanks for the detailed walkthrough! I see you're using the SSH Config tool. I noticed I can disable root login and use my existing user setup for accessing shares with this plugin. It activates and adds the user to sudo (not so without the plugin), so I can escalate to root as necessary. I forget whether I had to manually edit the sudoers file or create one, but needless to say I prefer that the root user can't even log in to the system.
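For anyone who ends up granting sudo by hand instead of through the plugin, a drop-in file is the safer route than editing /etc/sudoers directly. This is only a sketch: the user name "trunkton" is a placeholder, and you should validate with `visudo -c` before closing your root session.

```
# /etc/sudoers.d/trunkton — hypothetical drop-in granting one user full sudo
# Keep permissions at 0440 and check syntax with: visudo -c
trunkton ALL=(ALL) ALL
```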
  5. I was pretty excited to give this a try, depending on how the thread was going. Owning a Synology, the file manager and app UI there are a first-class experience. The good news is this project is on GitHub, and anyone can fork it and make modifications to improve upon it. I see one user, out of the few forks there, already has.
  6. Squid! Thanks for an amazing app. I'm using it on all my new Docker containers, which sit outside the array (but mount, encrypted, alongside the array), and I absolutely appreciate all the service and support you provide.
  7. Thanks for this plugin! I shut down my array, identified drives, and realized my soon-to-be parity drives occupy bays 1 and 4 instead of 1 and 2, lol. Just a suggestion: if you were to put this repo on GitHub, I wonder if others would collaborate or fork it to make their own improvements to this great app, then merge them back in for a better experience?
  8. THANKS!! I will look at switching the details in my /boot/config/go file over to that right now; I didn't know it was out there. Very useful. UPDATE: From what I can see, that plugin does not do things pre-mount, only after the array mounts or on a schedule.
  9. Squid, you're everywhere! Thanks for the info, appreciate it. I installed the SSH plugin by docgyver and now my normal user can SSH in. It's not on the sudoers list, so I'm working on changing that. Then I'm thinking Dockers will get run with the sudo command (effectively as root anyway), and I'll lock root out of SSH via the GUI, taking a backup first.
  10. Hi Henning, check this out; it sounds like the same issue you're having: I'm not the poster there, but I found it informative and ended up doing the same. To mitigate this issue on my end, I built the array with an SSD cache drive, set all shares to cache=no, ran the mover, then did a New Config and removed the SSD. Now Docker and VMs run completely independently of the array, preserving performance. If/when I elect to use a cache drive, it will be spinning rust, not an SSD, used only as a cache. Hope that helps!
  11. Good day! I was wondering whether anyone has had success enabling SSH and then granting sudo to their secondary users on Unraid? I'd like my docker-compose script, which fires up a bunch of things, to run without the entire system operating under root's credentials. Unraid is so exciting! The reason I'd like this is that I'd really prefer the directories inside /config didn't end up owned and grouped to root, who shouldn't be running that side of the show, at least that's my thinking. Just today I got a UPS dashboard notification that the power went out; the UPS ran on batteries and power came back 2-3 minutes later (I found out two hours after the fact). That's something I don't get rolling my own solution on Ubuntu Server (in addition to many other things).
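One way to keep /config from ending up root-owned, separate from the sudo question (my own assumption about a workable setup, not something from this thread): many images honor PUID/PGID environment variables, and Compose also has a `user:` key. A hypothetical fragment, with "1001" standing in for the secondary user's UID and the image name a placeholder:

```
# docker-compose.yml (fragment) — image name, IDs, and paths are placeholders
services:
  someapp:
    image: example/someapp        # hypothetical image
    user: "1001:100"              # run the container process as the non-root user
    environment:
      - PUID=1001                 # honored by linuxserver.io-style images
      - PGID=100
    volumes:
      - /mnt/disks/ssd/appdata/someapp:/config
```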
  12. Hey BLKMGK, thanks for commenting; I see where you're coming from. Two reasons:
- I get control over how that data is disseminated in case I choose to change it, something I don't get without encryption. Now I have the flexibility to put the password somewhere online, internal on the network, or on a secondary USB labeled "Encryption Key" (label maker and all).
- When a drive gets pulled for RMA, recycled, gifted, or resold, I sleep a lot better knowing things aren't wide open and searchable.
I have a hell of a time with Dockers on unRAID because I have to do the unlock manually. The answer, I suppose, is to prevent the Dockers from auto-starting, but then why am I trying to automate anything if I have to jump on the box to do everything by hand after, say, a power outage (which do happen, though the UPS is there to prevent the worst)? If I'm away from the house and someone powers on the box, I don't want to spend 30 minutes with someone unfamiliar doing support just to get back content I may want remote access to.
  13. I hate that I never tried this before wiping my array >_< Auto-mount works. I'm going to copy things back in, restore the symlink, and try my containers again. The only way this can go sideways now is if, after all that, the Docker containers attempt to start before the array does, something I didn't plan for.
  14. THANK YOU!!! I am adapting this to simply:

# auto unlock array
cp -f /boot/config/keyfile /root/keyfile

I copied the mounted keyfile from /root out to /boot/config/keyfile and placed this as the first thing in the go script. Hope it works; rebooting now. My reasons for not wanting to rely on some other device are simple: I still want encryption and automatic mounting without wrecking the Docker containers on my encrypted, unassigned SSD, and if I ever want that extra level of safety, I can just put the key on a USB drive labeled "Encryption Key" and plug it into Unraid. UPDATE: It worked!
  15. UPDATE: I rebooted and tried my key again; it works! I'm not sure what happened: after one attempt with an invalid key, it wouldn't take the good key until a reboot. Just glad it works and I don't have to hold for support. Emailed to close the ticket.
  16. It will be Pro once the licensing issues get fixed; on trial at the moment.
  17. It had been moved to the unassigned SSD, which I had to wipe; there were no copies left, unfortunately.
  18. Docker.img and the VM files, first seeded to the array, were moved manually (instead of copied) to the unassigned SSD, which meant that even with my wiped array I couldn't use Docker because the file wasn't present on the array or the SSD any longer. I googled the file to see if there was any mention of it on the forums here, with nothing; the only way I know to get it back is to redo things from scratch. I powered the system down, wiped the USB, redid it, and got an error from the key file that my GUID isn't recognized anymore. I'm not sure if wiping it changed something, but I will say this: I attempted to activate with the wrong key at first, then tried my current one (which I forgot I had), and hit the same activation problem. Now the license issue is in the hands of support for further action, which I trust will be resolved; it just sucks that I have to wait it out.
  19. A little bit about my setup before starting: I used to run FreeNAS, then Ubuntu Server + Docker (ZFS), and that was great, but I wanted a GUI to tie it all together more elegantly, which Unraid does. I backed up to an external device, wiped the ZFS share, redid it as Unraid, and last night, after 3 days, finally got to play with Docker since my media had been restored. I didn't use the GUI Dockers, just docker-compose and a bit of editing to match the new mount paths.

Before I say this next part, I'll tell you right now I have had a bad experience with keyfiles. I once saved a lot of sensitive information to an encrypted file and lost my keyfile. I later found it on a different drive I had put away, but I'm not sure if the attributes or metadata were wrong; the long and short of it is I lost the encrypted data. After that, I told myself never to bother with them again. That is why you will never see me using keyfiles in anything I have. Plain passwords are fine and more reliable for me.

Okay, so I had this encrypted share and an encrypted BTRFS unassigned SSD which held my Docker containers, and I had to punch in my password every time the machine booted. I chose to test this to make sure things run smoothly when I just want to leave it alone. Dockers won't boot until I enter my password and mount the array, so I do that. Then my Docker files drop a symlink and create data in the folder I had specified (weird, but whatever, I can delete it and put the symlink back; the symlink maps /mnt/disk/ssd/appdata/dockers to /config so my docker-compose scripts won't need much editing). I realized what I needed was some way to automatically mount my password-protected array (which also mounts the unassigned SSD) so my Docker containers never hit that issue on startup again. I spent an hour trying to make it work: I just copied the keyfile out of /root and used the same steps the guys use for the keyfile method, but locally.

I rebooted; my new folder containing the key, /unlock, was gone, and I decided to quit fooling around and go to the forums to look for an answer. I was told it's just a file and can be automated. I got tired of tinkering, wiped my array to XFS, and because I had lost my docker.img file (it had been moved to the SSD), I had to re-install my USB. The USB drive now has activation issues, which I'm holding on support for. This time I'm doing plain XFS and plain BTRFS for the SSD without any encryption, and I expect my Docker containers to work exactly as they should. ...I still want that encryption, though; maybe I'll play with it again, because I'd hate to come back and want to change it over at this point. It just sucks that I wiped everything in order to simplify my array.
  20. Are there any guides for those of us who used a password instead of a keyfile?
  21. Running the current stable release. To get ready for some Docker containers I'm deploying, I wanted to move Unraid off the default SSL port 443. I successfully moved HTTP from 80 to 8080 and figured it would be the same for HTTPS, so 443 to 4443. When I do that, the site stops responding, and reboots do not fix it. Restarting nginx does not fix it either:

root@fs:~# /etc/rc.d/rc.nginx restart
Checking configuration for correct syntax and
then trying to open files referenced in configuration...
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Shutdown Nginx gracefully...
Starting Nginx server daemon...
root@fs:~# reboot

The only thing I can do is edit /boot/config/ident.cfg back to 443 and reboot (not just restart nginx), and then I'm back in. Anyone else seeing this issue? Side note: a few hours ago I removed the bonding from the NICs (but kept the bridge, even though only eth1 is in use); not sure if that might have anything to do with it.
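For context, the port setting I keep reverting lives in /boot/config/ident.cfg. A fragment along these lines shows the working configuration I fall back to; the PORT/PORTSSL key names are taken from my own file, so treat them as an assumption if your release differs:

```
# /boot/config/ident.cfg (fragment) — the reverted, working values
# Key names are an assumption based on my file; check your own release.
USE_SSL="yes"
PORT="8080"     # HTTP, moved off 80 successfully
PORTSSL="443"   # HTTPS left at the default; 4443 breaks the webGUI for me
```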
  22. Perhaps name each system's sender email address differently so you'll know which one it's coming from, for example:
unraid-location@domain.tld
unraid-secondlocation@domain.tld
  23. Roger that! I'll leave well enough alone at 5 AM daily in that case.