landS

Everything posted by landS

  1. Good day Community, I am tired of my CrashPlan (to CrashPlan Cloud) backup breaking when run from a Windows machine across the network. I was planning on moving CrashPlan to an Ubuntu 14 LTS VM with an RDP/VNC GUI under Xen, but that setup has moments of flakiness: http://lime-technology.com/forum/index.php?topic=33396.0 I have NOT tried RDP/VNC Ubuntu under KVM; I would likely go with https://lime-technology.com/forum/index.php?topic=35858.0 Of course, now I see that a Docker of Ubuntu with an RDP GUI is available too: https://lime-technology.com/forum/index.php?topic=38932.0 So the questions are: what is the most stable way to have an always-on Ubuntu VM that runs just CrashPlan (with a GUI)? And what is the easiest way to move a VM between different machines? The CrashPlan docker is out, as I want the engine and the engine manager to be one and the same. Thanks crew
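     For the Docker-of-Ubuntu option above, the general shape of the command would be something like the sketch below. The image name, port, and volume path are placeholders only (the actual container from that thread documents its own), so treat this purely as an illustration of the approach, not a recipe.

        # Hypothetical example: run an Ubuntu image that exposes an RDP desktop,
        # keeping CrashPlan's configuration on the cache drive so the container
        # can be recreated or moved between machines. All names/paths are placeholders.
        docker run -d --name ubuntu-rdp \
            -p 3389:3389 \
            -v /mnt/cache/appdata/crashplan:/config \
            example/ubuntu-rdp-gui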
  2. Does this Docker image take into consideration https://github.com/phusion/baseimage-docker (a minimal Ubuntu base image modified for Docker-friendliness)?
  3. We need a donation button! --- Thanks as usual, Gary, you saved me a great deal of angst!
  4. On my backup server I botched one automated script. Rather than running a robocopy from //tower1/share to //tower2/share, I ran it to //tower2/Disk2/share... daily... for MONTHS. Now Disk 2 has 2.4 GB out of 4 TB remaining. I caught this while getting ready to add a CrashPlan docker. 614b, XFS on all 4 TB HGST data/parity disks, BTRFS pool on cache. Is it OK to run a robocopy from //tower2/Disk2/share to //tower2/Disk3/share?
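     For anyone wanting the concrete form of that disk-to-disk copy, it would look roughly like the sketch below, run from the Windows box. The flags and log path are just a suggested starting point, not a tested recipe.

        rem Copy the misplaced share from Disk2 to Disk3 on the same server,
        rem including empty subdirectories, with short retry waits and a log file.
        robocopy \\tower2\Disk2\share \\tower2\Disk3\share /E /R:1 /W:1 /LOG:C:\robocopy-share.log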
  5. Thanks Gary, I will check via preclear -t prior to a preclear -n.
  6. Thanks guys for the sanity check. Given my high confidence in HGST, that I now have about 180 hours of preclear activity (even if with no final reports), and the STRONG, urgent need for a backup of my data, I will add this disk to my array (forcing a preclear signature via 'preclear -n' if necessary). If this disk fails, at least the data is protected by parity. This will give me time to obtain and preclear a backup 4 TB HGST disk.
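     For reference, the preclear commands mentioned in the last two posts take roughly this form. The script name and device are placeholders for whatever your preclear install uses, and the comments simply restate how the flags are used above - a sketch, not authoritative documentation.

        # Check whether the disk already carries a valid preclear signature ('preclear -t' above).
        preclear_disk.sh -t /dev/sdX
        # Force a preclear signature onto the disk, as described above for 'preclear -n'.
        preclear_disk.sh -n /dev/sdX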
  7. I am eager to use this 4 TB drive in a 2nd unRAID machine because, due to a recent hardware failure, my data is no longer backed up. The first preclear reached 40% of the post-read when a blizzard took out our power for an extended period. The second preclear reached 98% of the post-read when I had to force the server down due to a mover mishap. This preclear will finish tomorrow evening (or so I hope). Given that the first 2 preclears reached the post-read, is it safe to trust the reports from the final preclear only... or should I really run 2 more full preclears? This is an HGST 4 TB drive. Thanks crew
  8. Yes - but what surprised me is that the Mover scheduler setting did not persist when I set it to the 31st:
     1 - set it to monthly/31st/apply, clicked on CACHE under Global in the GUI and clicked back on Mover - the setting was still the 31st
     2 - checked this AM and Mover under Global was back to daily
     3 - set it to the 1st, power-cycled the machine, and under Global it still shows the 1st
     After the 4 TB disk is in the array I will delete the directories currently split between the Cache and the Data2 drive and re-run the robocopy. All should be good. A note to future readers: I am forcing this machine into some non-standard use while I preclear a larger data drive and recover from a bad drive (that actually passed 3 preclears). In normal use (at home, and in multiple builds at work) unRAID is rock solid!
  9. So.... EEK. Saved the Mover schedule as monthly on the 31st, checked the change after applying (clicked on Cache and back on Mover under Global) last night and all looked good. This AM it was back to the default. When no other disks are available, Mover ignores the share's minimum free space: I have my share set to a 70 GB minimum and the disk currently sits at 22 GB free. This seems like somewhat scary behaviour, no? Used PuTTY to get into the system, logged in, typed killall rsync, yet the mover kept chugging along. Attempted to stop the array in the GUI... it didn't stop. Had to force a power down - hopefully that didn't screw up any files mid-move. Running a parity check given the forced shutdown - if there are any errors I will run a 2nd parity check for sanity's sake. edit: the mover scheduler persists the 1st, but doesn't persist the 31st. edit2: dang - the 4 TB HD was 98% through the post-read on the 2nd preclear attempt... almost 80 hours of 2 nearly finished preclears... gosh dangit.
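     On the killall note above: as far as I can tell the mover is a shell script that launches rsync over and over, so killing rsync alone lets it carry on. Something like the sketch below is the usual way to stop it by hand - the script path is my assumption for this release, so verify it before relying on this.

        # Stop the mover script itself first (path assumed; check your build),
        # then kill any rsync it already has in flight.
        pkill -f /usr/local/sbin/mover
        killall rsync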
  10. Good idea - but I maintain the main server via IPMI. It was in a closet - now it is in a (dry) crawl space.
  11. You can change the mover schedule in the webGUI --- found it: Settings/Global/Mover
  12. @bob - this solution works fine for me! Good idea @trurl - this is 6b12 - I missed that! Still exploring 6, and what you wrote had not registered. I will just set it to monthly (28 days out); after all the preclears are run I will reset it to nightly. THANK YOU ALL AGAIN!
  13. ***THIS*** "A potentially easier thing would be to add a line that automatically sent you an e-mail reminding you to spin down the drive with MyMain" - GREAT IDEA!!! Time to search the forums for a how-to!
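     The kind of line I have in mind is a crontab entry along the lines of the sketch below. It assumes a working mail command (or equivalent notification hook) on the server, and the schedule and address are placeholders.

        # E-mail a reminder at 08:00 on the 1st of each month to go spin down the
        # spare drive via MyMain. Assumes 'mail' is installed and configured.
        0 8 1 * * echo "Reminder: spin down the spare drive (MyMain)" | mail -s "unRAID spin-down reminder" me@example.com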
  14. OK - leaning towards disabling the mover scheduler for now and re-enabling the mover after 3 preclears of the 4 TB HGST. ... back to searching
  15. @trurl - the cache is a set of WD Red (lightly used, triple-precleared) 1 TB disks in a BTRFS RAID-1 cache pool. While not protected by parity, the data is at least mirrored.
  16. My servers are both on an APC Smart-UPS 1000. I moved the Windows machine to a cheaper CyberPower UPS (and purchased fresh batteries for the backup server's UPS) when I began getting the backup server up and running in place, a few weeks before the really bad power issues. It is the LAST time I mess with a cheap UPS; I should have let the APC pull double duty for the Windows/backup machine until I was done. The PSU is a Seasonic Gold model - it should have been fine. I have tried another PSU in the machine, an external dock, and even an eSATA hookup to each of the disks - none of the disks are readable. This brownout also killed my septic ejector, some LED light bulbs, the living room TV (on a Belkin power conditioner), and our washing machine. This IS risky and I am not comfortable with it. The last parity check was on Jan 1st (all good) - EVERY parity check has been error-free for years. The power went out during the Feb 1st check, but I manually relaunched it and it should be done within 23 hours. I WILL shut the server down if it has ANY errors, but 8% in, all looks good. My MAIN server has a fully precleared, ready-to-go 4 TB spare - but I do NOT want to yank that for the new backup server. This backup machine has a brand new 4 TB HGST for parity and is currently using a mix of older precleared drives for data. It would have gone into production with all new 4 TB HGST data drives, but for this opera.
  17. @gary - the drive is not terribly accessible. I am OK with pulling the plug, but would prefer a slightly more elegant solution. @bjp - yup, this is what I have been doing. Thanks for the GREAT plug-in page, BTW! However, the machine only goes down 2-3 times a year (storm related) and it is easy to forget about non-automated tasks. Not the end of the world if it MUST be done manually. Thanks for the great help everyone!
  18. Will do - moving the most important data over to the remaining array capacity. I will stop when the minimum free space level is reached and add the 4 TB drive when able (15% into the first post-read).
  19. Agreed - and in all honesty I did do a New Config... so it could have been something other than the disabled disk. The only things that changed with the New Config:
     1 - disk 3 became disk 2
     2 - disk 2 (disabled - no data) was removed
     3 - disk 4 was removed (2 TB WD drive - its spot to be taken by a 4 TB HGST)
     I do not really want to play with the bad 3 TB disk now - but I am willing to ship it off if any MOD wants to mess with it.
  20. Thanks crew - I am anxious, as the timeline for setting up this backup server was accelerated by at least 2 months. I had planned on the backup server taking over the duty of holding BACKUP data and the main server taking over the duty of holding PRIMARY data. A bad brownout killed my main Windows rig and the CyberPower UPS it was on, and none of its 8 drives (all 3 TB HGST) are readable now. My primary data lived there on RAID-1 disks --- all backed up to my main unRAID machine. That machine was to be decommissioned as a data storage device, but only after getting the backup unRAID server up and running 100%. This means that my main unRAID machine now contains the only copy of my data. On this backup unRAID machine all of my shares are set up to use the cache disk. My main server has NO cache disks.
  21. I have a single 4 TB fully precleared drive in my main server as a spare. It would be nice for this to spin down (automatically), as it is not used for anything other than a spare. This server has no need for a cache drive --- but what about adding this disk as a cache drive and turning off cache use for each of the shares? Will this prevent the disk from being used while at the same time allowing automatic spin-down?
  22. 1 - Manual spin-down indicates that they do spin down, but after a few moments they start spinning again. This is when they are in the array (I know out-of-array disks do not auto-spin-down, sadly).
     2 - The Dynamix plugin was updated and the system restarted - the issue appears to be gone. - thanks
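     For anyone searching later, the manual spin-down referred to above amounts to something like the sketch below. The device name and slot number are placeholders, and the mdcmd path/usage is my assumption for this release - spinning an array disk down behind unRAID's back can confuse its status display, so prefer the native route where possible.

        # Spin down a spare/non-array disk directly (placeholder device name).
        hdparm -y /dev/sdX
        # For a disk assigned to the array, the unRAID-native way is roughly:
        /root/mdcmd spindown 1    # 1 = array slot number (assumed syntax)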
  23. Solved - 6b12 with a disabled disk in the array causes the BTRFS cache pool to require a manual re-balance upon reboot. Remove the disabled disk and all is good.
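     For anyone who hits the same thing, the manual re-balance boils down to something like the sketch below, assuming the pool is mounted at /mnt/cache as usual - a rough outline rather than an exact recipe.

        # Confirm both cache devices are present, then kick off a balance.
        btrfs filesystem show /mnt/cache
        btrfs balance start /mnt/cache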
  24. Perhaps my google-fu is weak... What happens when my XFS data drives fill up but I still have 1 TB free on the RAID-1 BTRFS cache pool (6b12)? Will unRAID allow me to transfer to the cache drive but, since the data drives are full (as per the minimum free space set on the share), prevent the mover from moving anything off? (Note: I want to do this while I wait to finish preclearing a 4 TB HGST data drive 3 times.)