
tucansam

Members
  • Content Count

    776
  • Joined

  • Last visited

Community Reputation

11 Good

About tucansam

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. tucansam

    Preclear before shucking?

    I'm preclearing at 38MB/s on USB2.0, 33% pre-read done in 19 hours 26 minutes..... I'll let you guys know next month how it went.
  2. Guys, I am having some interesting problems. I am using this script: https://github.com/laurent22/rsync-time-backup to back up select shares from my primary server to a backup server. It's been going well for over a year; however, I confess that, even after reading the page referenced above, I don't know 100% how the damn thing works. I thought it was doing incremental backups, but examining the files in each of the created directories (each invocation of the script generates a unique directory based on date and time) shows nearly complete lists of all files from all shares. I think many of them are technically symlinks.

     I have found problems using either unBALANCE or Windows Explorer to move files. If I select a directory that should contain a small number of files and right-click > Properties in Windows Explorer, it spends many minutes counting many tens of thousands of files. If I try unBALANCE, I either get errors that there is not enough free space to move the files (there is), or, once I fixed that, unBALANCE tells me there are 876534 hours left to move the files, and that number only increases over time.

     So my first question is: is anyone else using the above-mentioned script, or have you used it in the past?

     My second question is: does anyone have any recommendations for a method to periodically back up one server to another? I tried looking at various dockers and apps, generally involving cloud backup of some kind, hoping I could adapt one to server-to-server use, to no avail. I want to help protect against bit rot by maintaining several full copies of important data, and then creating new backups of only changed files. That saves space, but it also means that if I discover a family picture is now corrupt, I can go back through several dates' worth of backups and find a version that is corruption-free.

     If anyone has any other general suggestions, I'm all ears.
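A note on why each dated directory looks like a full copy: rsync-time-backup passes `--link-dest=<previous backup>` to rsync, so files that haven't changed are stored as hard links to the prior snapshot rather than as new copies. Every snapshot therefore *lists* every file (which is why Explorer and unBALANCE count millions of entries), but unchanged files occupy disk space only once. A minimal sketch of the same hard-link idea using `cp -al` (GNU coreutils assumed; paths are throwaway stand-ins):

```shell
#!/bin/sh
# Sketch: why each dated backup directory *looks* like a full copy.
# rsync-time-backup passes --link-dest=<previous backup> to rsync, so an
# unchanged file in the new snapshot is a hard link to the old one, not a
# second copy. `cp -al` (GNU coreutils) demonstrates the same mechanism.
set -e
base=$(mktemp -d)

mkdir -p "$base/src" "$base/2019-02-10"
echo "family photo" > "$base/src/photo.jpg"
cp -a "$base/src/." "$base/2019-02-10/"          # first "full" backup

mkdir -p "$base/2019-02-11"
cp -al "$base/2019-02-10/." "$base/2019-02-11/"  # second snapshot: hard links only

# Both snapshots list photo.jpg, but it is one file on disk:
# its hard-link count is now 2.
stat -c '%h' "$base/2019-02-11/photo.jpg"        # prints 2
```

So a directory-walking tool sees N snapshots × all files, even though the data is deduplicated at the inode level; deleting one snapshot never breaks the others.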
  3. tucansam

    [Plug-In] unBALANCE

    Same result, same output, when typed by hand. Good suggestion though.
  4. Not even sure if it's possible to preclear a USB drive, although I don't see why not. Aside from heat, are there any issues? I figure it's probably best practice to test a drive a bit before violating the warranty. What's the standard procedure here? Thanks.
  5. tucansam

    [Plug-In] unBALANCE

    Yessir, I copied and pasted directly from your post. I do not have any fancy shells, just plain vanilla unraid with few modifications. Right now I am using rsync by hand to move data, but it's nowhere near as elegant as your plugin.
  6. tucansam

    [Plug-In] unBALANCE

    Thanks! The first command returns a zillion pages of this:

    %A|%U:%G|%F|%n
    %A|%U:%G|%F|%n
    %A|%U:%G|%F|%n
    (repeated over and over)

    And the second one returns this:

    root@ffs1:~# find "/mnt/disk7/backups/." ! -name . -prune -exec du -bs {} +
    1594424381632   /mnt/disk7/backups/./scripted
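For anyone reading along, here is my understanding of what that second diagnostic does: `! -name . -prune` stops find from descending, so `du -bs` is invoked once per top-level entry and reports each entry's apparent size in bytes (the ~1.59 TB figure above is the total under `./scripted`). A self-contained sketch on a throwaway directory (GNU find/du assumed):

```shell
#!/bin/sh
# Sketch of the diagnostic pattern from the post: list each top-level
# entry of a directory with its apparent size in bytes.
# `! -name . -prune` keeps find from recursing, so du runs once per
# top-level entry. Paths here are throwaway; GNU find/du assumed.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/scripted"
head -c 1024 /dev/zero > "$dir/scripted/big.bin"   # 1 KiB of data

find "$dir/." ! -name . -prune -exec du -bs {} +
# expected shape: "<bytes>\t<dir>/./scripted" -- bytes include the
# directory entry itself, so slightly more than 1024
```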
  7. tucansam

    [Plug-In] unBALANCE

    I can get the planning phase to work fine on my second server, but the "Move" and "Copy" buttons are still not available to click. Here is the output:

    --
    I: 2019/02/11 16:49:05 app.go:51: unbalance v5.4.0-1094-9eff134-v2018.09.18a starting ...
    I: 2019/02/11 16:49:05 app.go:59: No config file specified. Using app defaults ...
    I: 2019/02/11 16:49:05 server.go:77: Starting service Server ...
    I: 2019/02/11 16:49:05 server.go:94: Serving files from /usr/local/emhttp/plugins/unbalance
    I: 2019/02/11 16:49:05 array.go:46: starting service Array ...
    I: 2019/02/11 16:49:05 server.go:155: Server started listening https on :6238
    I: 2019/02/11 16:49:05 planner.go:52: starting service Planner ...
    I: 2019/02/11 16:49:05 core.go:101: starting service Core ...
    I: 2019/02/11 16:49:05 server.go:145: Server started listening http on :6237
    W: 2019/02/11 16:49:05 core.go:116: Unable to read history: open /boot/config/plugins/unbalance/unbalance.hist: no such file or directory
    I: 2019/02/11 16:49:05 app.go:73: Press Ctrl+C to stop ...
    I: 2019/02/11 16:52:40 core.go:175: Sending config
    I: 2019/02/11 16:52:40 core.go:180: Sending state
    I: 2019/02/11 16:52:40 core.go:190: Sending storage
    I: 2019/02/11 16:53:58 planner.go:70: Running scatter planner ...
    I: 2019/02/11 16:53:58 planner.go:84: scatterPlan:source:(/mnt/disk7)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk1)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk2)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk3)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk4)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk6)
    I: 2019/02/11 16:53:58 planner.go:86: scatterPlan:dest:(/mnt/disk5)
    I: 2019/02/11 16:53:58 planner.go:525: planner:array(7 disks):blockSize(4096)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk1):fs(btrfs):size(3000592928768):free(142756143104):blocksTotal(732566633):blocksFree(34852574)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk2):fs(btrfs):size(3000592928768):free(163589238784):blocksTotal(732566633):blocksFree(39938779)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk3):fs(btrfs):size(3000592928768):free(186394562560):blocksTotal(732566633):blocksFree(45506485)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk4):fs(btrfs):size(3000592928768):free(187017498624):blocksTotal(732566633):blocksFree(45658569)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk5):fs(btrfs):size(4000786976768):free(2186855804928):blocksTotal(976754633):blocksFree(533900343)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk6):fs(btrfs):size(3000592928768):free(1202956013568):blocksTotal(732566633):blocksFree(293690433)
    I: 2019/02/11 16:53:58 planner.go:527: disk(/mnt/disk7):fs(btrfs):size(2000398901248):free(186992017408):blocksTotal(488378638):blocksFree(45652348)
    I: 2019/02/11 16:53:58 planner.go:356: scanning:disk(/mnt/disk7):folder(backups)
    W: 2019/02/11 17:02:06 planner.go:367: issues:not-available:(exit status 1)
    W: 2019/02/11 17:02:08 planner.go:383: items:not-available:(exit status 1)
    I: 2019/02/11 17:02:08 planner.go:466: scatterPlan:No items can be transferred.
    I: 2019/02/11 17:02:08 planner.go:493: scatterPlan:ItemsLeft(0)
    I: 2019/02/11 17:02:08 planner.go:494: scatterPlan:Listing (7) disks ...
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk1):no-items:currentFree(142.76 GB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk2):no-items:currentFree(163.59 GB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk3):no-items:currentFree(186.39 GB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk4):no-items:currentFree(187.02 GB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk5):no-items:currentFree(2.19 TB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk6):no-items:currentFree(1.20 TB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:508: =========================================================
    I: 2019/02/11 17:02:08 planner.go:509: disk(/mnt/disk7):no-items:currentFree(186.99 GB)
    I: 2019/02/11 17:02:08 planner.go:510: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:511: ---------------------------------------------------------
    I: 2019/02/11 17:02:08 planner.go:512:
    I: 2019/02/11 17:02:08 planner.go:516: =========================================================
    I: 2019/02/11 17:02:08 planner.go:517: Bytes To Transfer: 0B
    I: 2019/02/11 17:02:08 planner.go:518: ---------------
    I: 2019/02/11 17:02:08 planner.go:466: scatterPlan:No items can be transferred.
    --

    This is the line that has me confused:

    I: 2019/02/11 17:02:08 planner.go:466: scatterPlan:No items can be transferred.

    Not sure how to troubleshoot this; the health of the array is fine. Thanks.
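The two `W:` lines (`issues:not-available:(exit status 1)` and `items:not-available:(exit status 1)`) mark where the scan of /mnt/disk7/backups failed. My understanding, and this is an assumption about unBALANCE's internals rather than its exact command, is that the "Checking issues..." pass inspects file ownership and permissions before offering Move/Copy. A quick way to hunt for entries that such a pass might choke on (which Unraid's New Permissions tool can then fix) is sketched below on a throwaway directory:

```shell
#!/bin/sh
# Hedged sketch: unBALANCE's "Checking issues..." phase inspects ownership
# and permissions (my reading of the plugin, not its exact command).
# This lists files that lack owner read+write permission -- the kind of
# entry worth fixing (e.g. with Unraid's New Permissions tool) before
# re-running the planner. The directory here is a throwaway stand-in.
set -e
dir=$(mktemp -d)
touch "$dir/ok.txt"
touch "$dir/locked.txt"
chmod 000 "$dir/locked.txt"

# Files missing owner read+write permission:
find "$dir" -type f ! -perm -u+rw
# prints only .../locked.txt
```

On a real array the equivalent check would be run against the source disk (here, /mnt/disk7/backups), optionally adding `! -user nobody` to spot files not owned by Unraid's standard `nobody:users`.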
  8. tucansam

    Out of memory

    Fix Common Problems keeps complaining that my server is out of memory. I've only got 12GB, and maybe it is time to bump that up, but I've not added anything new, docker- or plugin-wise, in over a year, and this is a recent drama. Diags attached. ffs2-diagnostics-20190206-1448.zip
  9. tucansam

    XFS, encryption, questions

    First server conversion is going swimmingly. Second server, I cannot get unBALANCE to function, and I may need to move files off one disk at a time by hand. Question: during the upgrade process, say I'm installing a larger disk in an array with no encrypted disks at the moment, can I make the new disk encrypted, and have parity rebuild the data onto the new (encrypted) disk?
  10. tucansam

    FreeDOS VM

    No. Gave up, for now. Probably going to order a 386, 486, Pentium Pro, and Pentium III from eBay at some point and run what I need on actual hardware.
  11. tucansam

    [Plug-In] unBALANCE

    On my primary server, this works great!!! On my secondary, it seems to hang at "PLANNING: Checking issues..." and never seems to finish, at least, the "Move" and "Copy" buttons never become available.
  12. tucansam

    XFS, encryption, questions

    I'm not sure what snapshots are, nor do I really know how to use checksums.
  13. tucansam

    XFS, encryption, questions

    I see that xfs -- encrypted and reiserfs -- encrypted are both options for a disk format, in addition to btrfs. Any reason to choose one over the other?
  14. tucansam

    XFS, encryption, questions

    First question: a year or so ago, BTRFS was still experimental, or newly implemented (can't remember exactly), and XFS was generally accepted as the filesystem to use for "mission critical" servers. Is this still the case, or is BTRFS now considered 100% GTG?

    Re: encryption. I have a 15-disk array that is very full. There is enough room on the drives to manually copy data over in such a way that one member disk could be completely emptied, have its filesystem changed if needed (currently XFS), be encrypted, and then have the data copied back over. This could be done one disk at a time until all disks were encrypted, without me having to move data over to tapes or a second server. Is this an appropriate way to accomplish this task? Thanks.
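The disk-shuffle described above can be sketched as a copy-then-verify round per disk; the important part is verifying the copy before wiping and re-encrypting the emptied disk. A minimal sketch with temporary directories standing in for the real mount points (which on Unraid would be /mnt/diskN; these paths are hypothetical):

```shell
#!/bin/sh
# Sketch of one round of the disk-shuffle: copy a disk's contents
# elsewhere, verify, and only then reformat/encrypt the emptied disk.
# Temp dirs stand in for real Unraid mount points (/mnt/diskN).
set -e
src=$(mktemp -d)    # stands in for the disk being emptied
dst=$(mktemp -d)    # stands in for the disk receiving its data
mkdir -p "$src/photos"
echo "img" > "$src/photos/a.jpg"

cp -a "$src/." "$dst/"    # -a preserves perms/times/ownership where possible
diff -r "$src" "$dst"     # exits non-zero if anything differs
echo "copy verified"
```

On real disks, `rsync -a` with a checksum pass is a common substitute for `cp -a` plus `diff -r`, but the shape is the same: never reformat the source until the comparison comes back clean.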
  15. root@ffs2:~# cat /boot/config/plugins/dynamix.apcupsd/dynamix.apcupsd.cfg
      SERVICE="enable"
      UPSCABLE="usb"
      CUSTOMUPSCABLE=""
      UPSTYPE="usb"
      DEVICE=""
      BATTERYLEVEL="10"
      MINUTES="10"
      TIMEOUT="0"
      KILLUPS="no"
      root@ffs2:~#

      Nearly new Tripp Lite, tons and tons of capacity, with my unraid server being the only item plugged in. It's a 17-drive system, but usually only a drive or two are spun up at most, so there isn't a huge power draw.

      I just tripped a breaker and heard the UPS complain. I ran out to the panel and reset the breaker (15 amp circuits FOAD), and kept doing what I was doing. A few hours later I noticed the unraid server was powered down. Sweet, I hadn't tried the UPS since I installed it (not sure how to do it without risking hard shutdowns). I figured unraid must have shut down shortly after the UPS went to battery, and powered it back on. Parity check! Logged in and the system was doing a parity check (none scheduled), so a hard shutdown had occurred. Pic attached of the UPS screen.

      I don't know what to configure or how to (safely) test. Any advice welcome. Thanks.
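For reference, here is the posted config annotated with the apcupsd shutdown semantics as I understand them (these three thresholds are apcupsd's standard shutdown triggers; the annotations are my reading, not official documentation):

```
SERVICE="enable"        # run the apcupsd daemon
UPSCABLE="usb"          # USB signalling cable
CUSTOMUPSCABLE=""       # unused when UPSCABLE is a standard type
UPSTYPE="usb"           # USB protocol driver
DEVICE=""               # empty = autodetect the USB UPS
BATTERYLEVEL="10"       # shut down when charge drops to 10% ...
MINUTES="10"            # ... or when estimated runtime drops to 10 minutes
TIMEOUT="0"             # 0 = no fixed "seconds on battery" trigger
KILLUPS="no"            # don't tell the UPS to cut output power after shutdown
```

With TIMEOUT=0, a shutdown should fire only when the charge or runtime thresholds are crossed, so a brief outage shouldn't shut anything down; a hard shutdown with no clean-shutdown attempt often points at the daemon never seeing the UPS at all (cable/communication), though that's a guess without the logs. Running `apcaccess status` (ships with apcupsd) shows whether the daemon is talking to the UPS and what charge/runtime it reports, which is a low-risk first test.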