Everything posted by tucansam

  1. Disregard. Found this: https://forums.unraid.net/topic/80103-keyfile-permissions/
  2. ETA: On the other server (also encrypted), I just noticed that /root/keyfile isn't present until after I have booted and successfully entered the key. So is this a temporary file?
  3. Encrypted array. /root/keyfile just got deleted. How to recover? Thanks.
  4. Based on the pictures I found online, I would bet two weeks' pay that those are made by iStarUSA, or whoever the OEM is for iStarUSA. Except for the design on the drive doors, they look identical. I have been using the iStarUSAs for years, and they are great. I wouldn't hesitate going with the Silverstones.
  5. I used ssh (root) to move some files, and file permissions ended up messed up, so when I tried to run an rsync backup script, I had issues. 'docker safe new permissions' takes hours to run on my array, and I only needed the permissions of one share changed. 'chmod 777 *' didn't work; it failed with an 'invalid option' error. But 'chmod 777 1*', then 'chmod 777 2*', then 'chmod 777 A*', then 'chmod 777 B*', etc., worked fine. chown behaved exactly the same way: 'chown -R nobody:users *' failed with an 'invalid option' error, while 'chown -R nobody:users A*' B* C* D* etc. worked fine. What did I do wrong?
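
     My only guess so far (untested) is that a filename in that share starting with a dash is being expanded by the shell and read by chmod/chown as an option. If that's it, telling them the options are over should sidestep it:

     # "--" marks the end of options, so a leading-dash filename can't be misread
     chmod 777 -- *
     chown -R nobody:users -- *
     # or force a path prefix so nothing in the glob looks like an option
     chmod 777 ./*
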
  6. High-Water, 750GB min free, automatically split as required, include all disks, export NFS private r/w to the other unraid server, export SMB private with only me having r/w access. ETA: I just manually cp'd the rest of /appdata with zero issues; I presume my use of recursive verbose on the initial command, combined with Plex's 99,999,999-level-deep directories, is what blew the system up.
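
     For reference, the quieter manual copy looked roughly like this (paths here are taken from the post further down, as an example rather than the exact command):

     cp -R /mnt/disks/ssd/appdata /mnt/user/backups/ssd-3-4-19/
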
  7. Had to do the set-inform thing from the command line on two of my APs, but aside from that, the changeover to the new container seems to have gone reasonably smoothly.
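
     For anyone else migrating, the "set-inform thing" on each AP was roughly the usual drill (IPs here are examples, and the SSH credentials are whatever the old controller had pushed as device authentication; ubnt/ubnt only applies to a factory-reset AP):

     ssh admin@192.168.0.20
     # then, at the AP's own prompt:
     set-inform http://192.168.0.5:8080/inform
     # adopt the AP in the new controller's UI, and repeat set-inform if it doesn't stick the first time
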
  8. I'm at 6.6.6 and have been for a while. Today I had an issue where the GUI became unresponsive, and then the shell did. There have been no updates to plugins or docker containers in the last three or four days, and everything has been running OK until tonight. The last thing I did was:

     # cp -Rv /mnt/disks/ssd/* /mnt/user/backups/ssd-3-4-19

     largely in order to back up my 'appdata' directory. This command ran fine for hours (appdata is huge thanks to Plex's inefficiency) and, when I came back to check on it, the shell was frozen (^C had no effect) and the GUI was too. I was able to ssh in a second time, and 'htop' would not work (blank screen after execution). Another ssh session: 'top' wouldn't work, blank screen. Another ssh session: 'uptime' showed a sysload of 49 across the board. 'ps -ef | more' hung (I waited 10 minutes and it didn't display anything). Now 'reboot' has also hung: I got the system-going-down message, but the system is still powered on and I can still ssh in (GUI is inop, array is down). At this point I'm going to simply cross my fingers and pull the plug, and cancel the subsequent parity check. 'diagnostics' also had no effect; I waited almost 20 minutes and it never completed.
  9. I have never even thought of this. I will have to look into this more. I have often feared what would happen if one of my primary application disks failed, never even thought of putting it in the array so it is protected.
  10. Here's a dumb question, for which I think I already know the answer, but I'm going to ask anyway. Once all other disks in the array have been converted to encrypted volumes, is there any necessary action to take on the parity drive(s)? I.e., can they remain in their default partition format of GPT, or do they need to be encrypted also?
  11. Something doesn't look right, but I can't put my finger on it 😆
  12. Thank you, I sure will. I've got 60 hours left on a data move from one disk to the others, so I can convert the final disk in the array to an encrypted volume. So in a couple of days I'll have one more stop/start cycle to do, and will report back. Not busy at all. It's running plex, unifi, headphones, sonarr, and sab. Right now the load average is hovering around 8-9 because of the huge rsync job, but it's usually between 1-2 if sab or sonarr is busy, and less than 1 when it's idle doing nothing. i7-2600 with 16GB RAM.
  13. Yes, just stopping the Recycle Bin, not actually shutting down the server. I'll update, thanks!
  14. Manually stopping the Recycle Bin hangs the GUI. Running 'diagnostics' from the shell prompt hangs.

      login as: root
      root@192.168.0.5's password:
      Last login: Wed Feb 27 05:40:10 2019 from 192.168.0.53
      Linux 4.18.20-unRAID.
      root@ffs2:~# cd /
      root@ffs2:/# ls
      bin/   dev/   home/   lib/    mnt/   root/  sbin/  tmp/  var/
      boot/  etc/   init@   lib64/  proc/  run/   sys/   usr/
      root@ffs2:/# diagnostics
      Starting diagnostics collection...

      (Nothing ever happens beyond this)

      Here is what I can get:

      root@ffs2:~# tail /var/log/syslog
      Feb 27 02:49:17 ffs2 nginx: 2019/02/27 02:49:17 [error] 19871#19871: *834072 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.0.53, server: , request: "POST /plugins/unassigned.devices/UnassignedDevices.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.5", referrer: "http://192.168.0.5/Main"
      Feb 27 05:39:39 ffs2 kernel: mdcmd (181): spindown 12
      Feb 27 05:40:10 ffs2 sshd[19803]: Accepted password for root from 192.168.0.53 port 50084 ssh2
      Feb 27 05:46:02 ffs2 sshd[24417]: Accepted password for root from 192.168.0.53 port 50161 ssh2
      Feb 27 05:46:49 ffs2 ool www[21531]: /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin 'empty'
      Feb 27 05:46:51 ffs2 Recycle Bin: User: Recycle Bin has been emptied
      Feb 27 05:46:54 ffs2 ool www[24117]: /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin 'update'
      Feb 27 05:46:54 ffs2 Recycle Bin: Stopping Recycle Bin
      Feb 27 05:48:42 ffs2 sshd[26830]: Accepted password for root from 192.168.0.53 port 50197 ssh2
      Feb 27 05:48:54 ffs2 nginx: 2019/02/27 05:48:54 [error] 19871#19871: *881430 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.0.53, server: , request: "POST /update.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.5", referrer: "http://192.168.0.5/Settings/RecycleBin"
      root@ffs2:~#
  15. How do I do that from the command line? Usually I can still ssh into the machine; it's just the GUI that totally goes tits up.
  16. The only one that isn't up to date is Dynamix System Stats, and that's because both servers are running 6.6.6 and not the beta.
  17. OK, thanks, good call. It's in the middle of a 90+ hour copy from one disk to another so I can finally convert all disks to encrypted volumes, but when it's done with that I will stop the recycle bin and attempt to stop the server. Assuming it doesn't work and it still hangs, the GUI freezes completely, and thus I cannot run diags to post. Last time it happened, I ssh'd in and ran 'tail /var/log/syslog', and the last few entries were typical stuff not indicative of any errors. I haven't had unraid fail to shut down since the 5.x days (when a hard reset was literally the only way I could do it), but now this has become worrisome again.
  18. Anytime I try to stop my server, it hangs at "Stopping Recycle Bin" in the bottom line of the web GUI, where information is displayed. Only thing that works at that point is a 'reboot' issued from the shell, which of course then spawns a parity check.
  19. Is there a way to change the filesystem to an encrypted one on disks outside the array, i.e., those mounted with Unassigned Devices? I have an SSD for docker use, and a spinner for scratch/downloads, and would like to encrypt both.
  20. I am in the process of using unbalance and rsync (directly) to move data around a 14-drive array so I can slowly convert each disk to encrypted. I'm about two weeks into the ordeal and I'm just over halfway done. My 8TB disks are all that's left, and they are taking 50-60 hours per disk to copy data off of them and onto other member disks. After this server is done, I have a second one to convert, in the same way. I am averaging 35MB/s copy speeds. I have set up the write method as per other threads, to attain the best possible speeds. What else can I do to improve speeds when copying from one disk to another? I believe I read something about disabling parity.....
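
     For clarity, the "rsync (directly)" part is nothing fancier than a plain disk-to-disk copy along these lines (disk numbers are examples, not my actual layout):

     rsync -avPX /mnt/disk8/ /mnt/disk3/
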
  21. I'm preclearing at 38MB/s on USB2.0, 33% pre-read done in 19 hours 26 minutes..... I'll let you guys know next month how it went.
  22. Guys, I am having some interesting problems. I am using this script: https://github.com/laurent22/rsync-time-backup to back up select shares from my primary server to a backup server. It's been going well for over a year; however, I confess that, even after reading the page referenced above, I don't know 100% how the damn thing works. I thought it was doing incremental backups, but examining the files in each of the created directories (each invocation of the script generates a unique directory based on date and time) shows nearly complete lists of all files from all shares. I think many of them are technically symlinks.

      I have found problems using either Unbalance or Windows Explorer to move files. If I select a directory that should contain a small number of files and right-click > Properties in Windows Explorer, it spends many minutes counting many tens of thousands of files. If I try Unbalance, I either get errors that there is not enough free space to move the files (there is), or, once I fixed that, Unbalance tells me there are 876534 hours left to move the files, and that number only increases over time.

      So my first question is: is anyone else using the above-mentioned script, or have you used it in the past? My second question is: does anyone have any recommendations for a method to periodically back up one server to another? I tried looking at various dockers and apps, generally involving cloud backup of some kind, hoping I could adapt them to server-to-server use, to no avail. I want to help protect against bit rot by maintaining several full copies of important data, and then creating new backups of only changed files. This will help save space, but also, if I discover a family picture is now corrupt, I can go back through several dates' worth of backups and find a version that is corruption free. If anyone has any other general suggestions, I'm all ears.
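
      For what it's worth, my rough understanding of the pattern that script seems to use (which would also explain why every dated directory looks like a complete copy) is a hard-link snapshot, something like the sketch below; the paths and the "latest" marker are made up for illustration:

      # unchanged files in the new dated directory become hard links into the
      # previous snapshot, so they appear everywhere but only consume space once
      rsync -a --link-dest=/mnt/backups/pictures/latest \
          /mnt/user/pictures/ /mnt/backups/pictures/2019-03-04-120000/
      # repoint the "latest" marker at the snapshot that just finished
      rm -f /mnt/backups/pictures/latest
      ln -s /mnt/backups/pictures/2019-03-04-120000 /mnt/backups/pictures/latest
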
  23. Same result, same output, when typed by hand. Good suggestion though.
  24. Not even sure if it's possible to preclear a USB drive, although I don't see why not. Aside from heat, are there any issues? I figure it's probably best practice to test a drive a bit before voiding the warranty. What's the standard procedure here? Thanks.
  25. Yessir, I copied and pasted directly from your post. I do not have any fancy shells, just plain vanilla unraid with few modifications. Right now I am using rsync by hand to move data, but it's nowhere near as elegant as your plugin.