Trunkton

Everything posted by Trunkton

  1. Hey BLKMGK, thanks for commenting, I see where you're coming from. Two reasons:
     - I get control of how that data is disseminated in case I choose to change it, something I don't get without encryption. I now have the flexibility to put the password somewhere online, internally on the network, or on a secondary USB labelled "Encryption Key" (label maker and all).
     - When a drive gets pulled for RMA, recycled, gifted or resold, I sleep a lot better knowing things are not wide open and searchable.
     I have a hell of a time getting Dockers working on unRAID because I have to do the unlock manually. The answer, I suppose, is to stop the Dockers from auto-starting, but then why am I trying to automate anything if I have to jump on the box and do everything by hand after, say, a power outage (which do happen, though the UPS is there to prevent the worst)? And if I'm away from the house and someone powers on the box, I don't want to spend 30 minutes doing support with someone unfamiliar just to get back content I may want remote access to.
  2. I hate that I never tried this before wiping my array >_< Auto-mount works. I'm going to copy things back in, restore the symlink, and try my containers again. The only way this can go sideways now is if, after all that, the Docker containers try to start before the array does, or something else I didn't plan for.
  3. THANK YOU!!! I am adapting this to simply:
     # auto unlock array
     cp -f /boot/config/keyfile /root/keyfile
     I copied the mounted keyfile from /root out to /boot/config/keyfile and put this as the first thing in the go script (a slightly fuller sketch is after this list). Hope it works, rebooting now. My reasons for not wanting to use some other device are simple: I still want encryption and automatic mounting without wrecking the Docker containers on my encrypted, unassigned SSD, and if I ever want, I can just put the key on a USB drive labelled "Encryption key" or something and plug it into unRAID if I need that level of safety. UPDATE: It worked!
  4. UPDATE: I rebooted and tried my key again, and it works! I'm not sure what happened; it seems one attempt with my invalid key meant it wouldn't take the good key until I rebooted. Just glad it works and I don't have to hold for support. Emailed them to close the ticket.
  5. It will be Pro after the licensing issues get fixed, on trial at the moment.
  6. It had been moved to the unassigned SSD, which I had to wipe; there were no copies left, unfortunately.
  7. Docker.img and the VM files first seeded to the array were moved (not copied) to the unassigned SSD, which meant that even with my wiped array I couldn't use Docker because that file was no longer present on the array or the SSD. I googled the file to see if there was any mention of it on the forums here and found nothing; the only way I know to get it back is to redo things from scratch. I powered the system down, wiped the USB, redid it, and got an error from the key file that my GUID isn't recognized anymore. I'm not sure if wiping it changed it or what, but I will say this: I attempted to activate with the wrong key at first, then tried my current one (which I forgot I had) and hit the same activation problem. Now the license issue is in the hands of support for further action, which I trust will be resolved; it just sucks that I have to wait it out now.
  8. A little about my setup before starting: it used to be FreeNAS, then I started using Ubuntu Server + Docker (ZFS), and that was great, but I wanted a GUI to tie it all together more elegantly, which unRAID does. I backed up to an external device, wiped the ZFS share, redid it as unRAID, and last night, after 3 days, I finally got to play with Docker since my media had been restored. I didn't use the GUI Dockers, just docker-compose and a bit of editing to match the new mount paths.
     Before I say this next part, I'll tell you right now I have had a bad experience with keyfiles. I saved a lot of sensitive information to an encrypted file one time and lost my keyfile. I later found it on a different drive I had put away, but I'm not sure if the attributes or metadata were wrong or something; the long and short of it is I lost the encrypted data. After that, I told myself never to bother with them again. That is why you will never see me using keyfiles in anything that I have. Plain passwords are fine and more reliable for me.
     Okay, so I had this encrypted share and an encrypted BTRFS unassigned SSD which held my Docker containers, and I had to punch in my password every time the machine booted up. I chose to test this to make sure things run smoothly when I just want to leave it alone. The Dockers won't start until I enter my password and mount the array, okay, I do that. Then my Docker files drop the symlink and create data in the folder I had specified (weird, but whatever, I can delete it and put the symlink back; the symlink maps /mnt/disk/ssd/appdata/dockers to /config so my docker-compose scripts won't need too much editing, see the sketch after this list). I realized what I need is some way to automatically mount my password-protected array (which also mounts the unassigned SSD) so my Docker containers never have that start-up issue again. I spent an hour trying to make it work: I just copied the keyfile out of /root and used the same steps the guys are using for the keyfile method, but locally. I rebooted, my new folder containing the key, /unlock, was gone, and I decided to quit fooling around and go to the forums to look for an answer. I got told it's just a file and can be automated, got tired of tinkering, wiped my array to XFS, and because I had lost my docker.img file (it had been moved to the SSD) I had to re-install my USB. The USB drive now has activation issues which I'm holding on support for.
     This time I'm doing plain XFS and plain BTRFS for the SSD, without any sort of encryption, and I expect my Docker containers to work exactly like they should. ...I still want that encryption though; maybe I'll play with it again, because I'd hate to come back later and want to change it over at that point. It just sucks that I wiped everything in order to simplify my array.
  9. Are there any guides for those of us who used a password over a keyfile?
  10. Running the current stable release. To get ready for some Docker containers I am deploying, I wanted to get unRAID off the default 443 for SSL. I was successful moving HTTP from 80 to 8080 and figured it would be the same for HTTPS, so 443 to 4443. When I do that, the site stops responding and reboots do not fix it. Restarting nginx does not fix it either:
      root@fs:~# /etc/rc.d/rc.nginx restart
      Checking configuration for correct syntax and
      then trying to open files referenced in configuration...
      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful
      Shutdown Nginx gracefully...
      Starting Nginx server daemon...
      root@fs:~# reboot
      The only thing I can do is edit /boot/config/ident.cfg back to 443 (see the sketch after this list), reboot (not just restart nginx), and then I'm back in. Anyone else seeing this issue? Sidenote: I removed the bonding from the NICs (but kept the bridge, even though only eth1 is in use) a few hours ago; not sure if that might have anything to do with it.
  11. Perhaps name the two systems' sender email addresses differently so you'll know which one it's coming from, for example: [email protected] [email protected]
  12. Roger that! I'll leave well enough alone and keep it at 5AM daily in that case.
  13. Yep! Had an idea that something like this should exist, then searched it out and found Squid's already on the case!
  14. Thank you!!! I was getting annoyed with plugins continuously needing handholding from me to take them through an update >_< If possible, I would love to see update checks every few hours, or even better, have the update applied whenever the app signals that one is available. You make a real difference with this program, thanks for sharing, sir!
  15. Appreciate this info! I used it to update my own docker-compose as well. Just wondering if I will break things by doing this or if it's better left alone? Fresh unRAID install today, all plugins current:
      You are using pip version 10.0.1, however version 18.1 is available.
      You should consider upgrading via the 'pip install --upgrade pip' command.
  16. Beautiful script! Thanks so much for this. I remember trying unRAID once many years ago when ReiserFS was the primary option. Running pre-clears was CLI only and nowhere near as elegant as you've made it.
  17. Can you please explain what you mean by min allocation size? I like to learn. See here: http://lime-technology.com/forum/index.php?topic=18977.0 and here for a more descriptive explanation: http://lime-technology.com/wiki/index.php/Plugin/webGui/Share_Settings - It's actually called "Min Space", and without that setting you can fill a disk completely and jam up the whole share so that no more files can be added. The value for 20GB is "20000000" (see the note after this list).
  18. I still have a Pro license and will be checking back periodically to see if 5 comes out of beta with further refinements. Reversing the data migration from partial unRAID/WHS v1 to WHS 2011 (7 of 12TB of ReiserFS data) took a couple of days, but now I am all set up! The mechanism for getting files off ReiserFS and onto NTFS was either a network file transfer or using Windows to format a disk and then rsync-copying all the data over for inclusion. I used Linux Mint and copied files with rsync, a better alternative to the 'cp' command when doing large file moves (see the sketch after this list).
      My experience with unRAID, where it mattered, has been negative despite my best efforts. I purchased WHS 2011 OEM for $60 and am running StableBit's DrivePool add-in, which just came out of beta last week. The good thing there is I can either move files into \\shares or straight to "E:" (the pool share) without worrying about file sizes or unknowingly filling single disks and then getting errors. Every night it balances the files across all drives so that none are full.
      My posts:
      (My short tut) Mount NTFS disks to copy data from with full character support - http://lime-technology.com/forum/index.php?topic=5904.msg169857#msg169857
      (Self-resolved) 5b12 one drive full to capacity, can't access shares anymore - http://lime-technology.com/forum/index.php?topic=19060.msg170247
      The game-ender: had I learned early on about setting the shares' "Min allocation size" to 20GB, it would have been better to stay with unRAID. I got frustrated at the lack of support response and figured it was time to cut my losses.
      (Informational) Duplicate files by using /mnt/diskX instead of /mnt/user - http://lime-technology.com/forum/index.php?topic=19162.msg171742
  19. Both methods fail on b12. The related forum thread was from 4 years ago, so the commands error out.
  20. If you've got unRAID, the script is installed at http://tower:8080/dupe_files. Also see: http://lime-technology.com/forum/index.php?topic=2459
  21. Hey guys, I just realized that putting things in /mnt/diskX is poor form and shouldn't be done unless there's a good reason. Roughly 1TB of data is in /mnt/diskX instead of /mnt/user because files weren't being added to /mnt/user without "Min Space" configured; had I set "Min Space", this wouldn't have been necessary. Could I just go into /mnt/diskX and "cp -ruvpn" the items over to /mnt/user? All files currently on my unRAID server should be part of /mnt/user.
  22. To get this going again, instead of preserving most of the /boot/config files, I kept only network.cfg and my key file. Since I had a screenshot of the disk IDs, I put them back to what they were. Now it's all up and running again. I'm going to slowly put the add-ons back in and hope this doesn't happen again. I was away from the server when this was posted, so the syslog couldn't be posted at the time. I am still using 5b12. Also removed 12GB of the 16GB of memory as it is unneeded.
  23. Hey guys, I'm thinking about buying an Intel NIC to pop into my Supermicro board just to move to stable v4, but before attempting that I wanted to see if there are any other ideas. Currently 3x 2TB drives are installed in the array with an additional 750GB cache drive. I don't have a parity disk yet, but some data is on the cache drive. One of the three HDDs is completely full at 100%. All drives passed preclear 2 weeks ago. I ran "reiserfsck --check /dev/mdX" on each disk without errors (see the sketch after this list). I took a screenshot of the existing setup, wiped the USB key, and re-installed, preserving only the files in the 'config' directory. I re-added the disks as they were and rebooted... It seems that after I start the array, the web client still dies. I've now left it for 3 days after starting the array without rebooting and still can't get into my general shares; only 2 of 3 "diskX" shares are accessible. The last thing I saw on the webpage was "Resizing" beside one disk (not the drive at capacity) and "Mounted" beside the others. Just curious if anyone's seen odd issues like this. I was (and am) still in the process of moving all my data away from WHS onto unRAID and purchased a Pro key 2 weeks ago.
  24. Extremely useful app, good work q! The GUI is amazing; you've made it easy to use and powerful. Good job. Just wanted to mention that copying files in the shell from the drives (former WHS folders) crashed my server 2-3 times until I manually learned how to mount/share the device. Crashed as in I couldn't get to the HTTP or telnet interface anymore, but it was still reachable from the console to reboot. I think it might not have liked some of the filenames. To solve it I installed ntfs-3g with the unRAID package manager, first created /mnt/user/tmp/wdcavy2972, and then:
      mount -r -o nls=utf8 -t ntfs-3g /dev/sdg1 /mnt/user/tmp/wdcavy2972
      File copies have been running for the past 12 hours without crashing. The command I've used to move files (I would have used mc, but the difference in casing means it wouldn't merge the folders together):
      cp -ruvpn /mnt/user/tmp/wdcavy2972/Videos/* /mnt/user/videos
      A commented sketch of these commands is after this list. Running 5.0beta12 w/ S.N.A.P ver: 5.08
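
A minimal sketch of the go-script change from post 3, assuming the keyfile has already been copied out to /boot/config/keyfile as described there; the emhttp line is just the stock one the go script already ends with:

    #!/bin/bash
    # /boot/config/go - runs once at boot on unRAID
    # auto unlock array: put the keyfile back where unRAID looks for it
    cp -f /boot/config/keyfile /root/keyfile
    # ...the rest of the stock go script follows, ending with the webGUI start:
    /usr/local/sbin/emhttp &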
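
The symlink fix from post 8, sketched out; the post gives the two paths but not the exact commands, so the direction (the /config link pointing at the appdata folder on the SSD) is an assumption:

    # remove the plain folder the containers created in place of the link,
    # then recreate the link so /config resolves to the appdata on the SSD
    rm -rf /config
    ln -s /mnt/disk/ssd/appdata/dockers /config
    ls -l /config    # should now show the link and its target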
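
For post 10, the recovery edit in /boot/config/ident.cfg; the key names below are from memory, so double-check them against your own file:

    # /boot/config/ident.cfg (relevant lines only)
    PORT="8080"      # HTTP - moving this off 80 worked fine
    PORTSSL="4443"   # HTTPS - this is the change that broke the webGUI
    USE_SSL="yes"
    # to recover: set PORTSSL back to "443" and reboot; restarting nginx alone was not enough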
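
The "Min Space" value from post 17 also ends up in the share's config file on the flash drive; the key name shown is from memory, so verify it on your own system:

    # /boot/config/shares/<sharename>.cfg
    shareFloor="20000000"    # value is in KB, so 20,000,000 KB is roughly 20GB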
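
A sketch of the rsync pass described in post 18; the mount points here are placeholders, not the ones actually used:

    # -a keeps permissions, ownership and timestamps, -v lists files as they copy,
    # --progress shows per-file progress; trailing slashes copy contents, not the folder itself
    rsync -av --progress /mnt/old_reiserfs_disk/ /mnt/new_ntfs_disk/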
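
The per-disk check from post 23, looped over the array devices; as I understand it this is the read-only check and should be run with the array started in maintenance mode, and the md numbers need to match your own disk assignments:

    for n in 1 2 3; do
      reiserfsck --check /dev/md$n
    done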
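
And the commands from post 24 again with the options spelled out (same device and paths as in the post):

    mkdir -p /mnt/user/tmp/wdcavy2972
    # -r mounts read-only; nls=utf8 is the full-character-support option from the mount tutorial
    mount -r -o nls=utf8 -t ntfs-3g /dev/sdg1 /mnt/user/tmp/wdcavy2972
    # cp flags: -r recurse, -u skip files the destination already has a newer copy of,
    # -v list each file, -p preserve attributes, -n never overwrite existing files
    cp -ruvpn /mnt/user/tmp/wdcavy2972/Videos/* /mnt/user/videos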