gundamguy

Members · 755 posts

Posts posted by gundamguy

  1. Will this prevent corruption if a drive red balls during the copy?

     

    After a bit of reading I think the answer is yes.

     

    Quotes from the Rsync Documentation.

     

Under -c (use checksum) it says:

    Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

     

And under --remove-source-files it says:

    This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.

     

So it seems that corruption would raise a red flag and the source file would not be deleted, unless I am missing something.
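If you want a belt-and-suspenders check of your own before anything is removed, you can compare checksums manually; a minimal sketch (the paths are placeholders, and rsync already performs an equivalent whole-file check on its own):

```shell
# Compare source and destination checksums before deleting anything yourself.
src=/mnt/diskX/some/file   # placeholder path
dst=/mnt/diskY/some/file   # placeholder path
if [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]; then
  echo "checksums match, safe to remove source"
else
  echo "MISMATCH, keep the source"
fi
```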

     

     

I just started this process last night. I want to say thanks for the intro on how to make this change. Without your post I doubt I would have made the effort to change to xfs.

     

    I asked a few questions in another thread and someone made a suggestion which I want to share because I believe it reduces the steps, and is a more powerful approach.

     

    To my knowledge the following code represents a simplified method to accomplish steps 6, 7 & 8 in one pass while also maintaining permissions, timestamps, owners, groups, symlinks, and device files using rsync. 

     

    rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

     

Where diskX is the reiserfs disk you are copying to the newly formatted xfs disk (diskY). For example, I ran this last night as /mnt/disk1/ /mnt/disk4/, copying disk1 to disk4.

     

-a means it's in "archive mode", which equals -rlptgoD (recursive, symlinks, permissions, timestamps, group, owner, device files). Note: if you want to preserve extended attributes, add a capital X (-X) to your rsync call.

    -v puts it in verbose mode (verbose is not required)

    --progress shows you the progress of the copies as files are moving (Progress is not required)

--remove-source-files: after the copy is done, rsync will verify and then remove the source files. IMPORTANT: do not use --remove-source-files if files are currently being written to the disk; that will corrupt those files. You should avoid writes anyway since you're moving data from one disk to the other, so disable any plugins or dockers which write to that disk, and it's best not to write to shares that could be put on that disk while rsync is running.
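For example, if you also want extended attributes preserved, the same move with -X added would look like this (a sketch; -X assumes xattr support on both filesystems, and diskX/diskY are the placeholders from above):

```shell
# Same one-pass move as before, plus -X to carry extended attributes across.
rsync -avX --progress --remove-source-files /mnt/diskX/ /mnt/diskY/
```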

     

    The end result should be empty directories on diskX and a duplication of diskX on diskY.

     

    Rsync is very powerful and there are way more things that can be done with it, so if you have questions or want to know more about what it can and can't do you can read more here.

     

    I think this is a much more elegant solution.

     

  3. Thanks again for the advice, I feel like I'm moving in the right direction now.

I guess I need to figure out if I want to have a second unraid box, or if I don't have enough data to warrant that yet and should back up via a USB3 HDD attached outside of the array.

     

Oddball question. Should you back up \flash ? In the event of a USB drive failure I realize that a new key will be required (I have a backup of that anyway... because why not...), but are there any other settings or files whose backups would help the process of getting up and running again? If not the whole flash, are there important files we should be looking at backing up?
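For what it's worth, the flash drive is mounted at /boot on unRAID, so a dated copy onto the array could be as simple as the sketch below (the destination path is my own placeholder, not an unRAID convention):

```shell
# Copy the whole flash drive (mounted at /boot) to a dated folder on the array.
rsync -a /boot/ "/mnt/user/backups/flash-$(date +%F)/"
```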

With rsync, files are usually compared by size & modification time.

    So silent corruption may not cause the files to be rsynced to the backup.

     

There's another strategy with the -c option, which does a checksum compare rather than size/time.

    That would surely propagate the silent corruption.

     

    Accidental deletion protection. 

    In this case, for crucial files that need to be kept over a time period or 'deltas' there is the --link-dest option.

     

    This can be done locally disk to disk, or it can be done remotely if the remote server 'pulls' the files.

With this option you can link the current destination directory to a new name. All files are recursively linked from the old directory into the new directory before the rsync is executed.

The rsync is then executed from the source to the new name. If there are any changes they are copied over. If there are no changes at all, the new directory looks like the old directory.

     

This has the benefit of using 1x the space for the whole tree, plus the space required for each file's changes.

I've used this to mirror source trees and carry deltas. It can be done daily, hourly, monthly, whatever is chosen.

    You do this by managing the source directory name and destination directory name on the backup volume.
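The dated-directory scheme described above can be sketched like this (all paths and the "latest" symlink name are my own choices for illustration):

```shell
# Each run copies only changed files; unchanged files are hard-linked
# against the previous snapshot via --link-dest, so a full-looking tree
# costs only the space of what changed.
SRC=/mnt/user/important/   # placeholder source
DEST=/mnt/backup           # placeholder backup volume
TODAY=$(date +%Y-%m-%d)

rsync -a --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"
ln -sfn "$DEST/$TODAY" "$DEST/latest"   # point "latest" at the new snapshot
```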

     

    This article has a good description of the process.

    http://goodcode.io/blog/easy-backups-using-rsync/

     

    I want to say thank you so much for this! Really good stuff! The link is also awesome and set me on the right path.

My only problem is that my wife and I live in a small apartment (just starting out), so I'm not sure that I can convince her that a second backup server is what we need. She's on board with the primary one. I could possibly put a remote server in at my parents' house... but we'll have to see about that as well. Also, I assume there are a few additional steps when performing backups across the internet?

     

  5. I've seen some people suggest using rsync to set an automated backup of your unraid box to another unraid box (or other linux box).

     

    If you use an automated rsync process isn't there potential for corruption to be mirrored into the backup?

     

The goals in having a backup are to protect against the following, right?

     

• Software Errors – File system corruption or bit rot or some other sort of internal error which silently (or not so silently) corrupts your data.

• Hardware Failure – Protect against hard disk or other mechanical error.

• Human Error – Protect against accidental deletion or other human-initiated mistakes which destroy your data.

     

My concern with automated rsync processes is that rsync will update files if they have changed since an earlier rsync run. If the reason the file has changed is silent corruption, won't that silent corruption be copied into the backup, diminishing your ability to recover?

    Maybe rsync doesn't work that way... or maybe this can be avoided by properly configuring rsync?

     

Any advice on how to properly do this would be much appreciated, because I've definitely seen people use the --delete option (which deletes files on the destination which don't exist on the source), which totally removes the ability to recover from human error should you not catch your mistake before your automated process runs. Any other pitfalls to avoid?
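One hedged way to keep --delete while still protecting against that scenario is rsync's --backup/--backup-dir pair, which moves anything --delete would remove into a dated directory instead of destroying it (the directory name and paths here are my own choices):

```shell
# Anything --delete would remove (or overwrite) on the destination is moved
# into deleted-YYYY-MM-DD under the destination instead of being lost.
rsync -a --delete --backup --backup-dir="deleted-$(date +%F)" /mnt/source/ /mnt/backup/
```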

     

Just want to say I did a bit more digging on this, and it seems that there were a lot of dropped packets early on, right after a reboot (maybe I should have mentioned before that the earlier numbers were taken just after a reboot), and since then it has been really solid, driving the ratio way down.

     

Is there a reason why I would expect a higher percentage of drops early on, with the number getting lower and lower as time goes on?

So on the plugin manager page it shows you the plugins you have installed. The icons are clickable. I assume this is so you can go to the GUI php page, but I'm not sure how to set it. Does that make sense?

     

    A direct jump to the plugin's configuration page makes a great deal of sense.

     

I am not a plugin developer (yet) but this makes absolute sense to me. I got really confused at first when I installed my first plugin, only to have to hunt through the menus to discover where it was located. Clicking the plugin icon or link seems way more intuitive.

I might delve into this a bit more later; right now I'm showing Received: 992427 Drops: 59817, which is 6.02%... slightly higher than I expected... I guess it's not a huge concern. When I pinged Google 20 times I got 0% loss though... I suppose that 20 isn't really a great sample... but I digress.

     

    6% is a bit high, but if they are only coming through the WAN and not the LAN there may not be anything you can do about it.

     

    6% is unacceptable. TCP should operate with less than 1% packet loss unless something is wrong with the network. UDP service like VOIP is not relevant unless the server is running a UDP service. This is most likely due to a bad physical connection.

     

    Most of my traffic should be within my LAN... I'm only running Plex, and APCUPSD (networked with a windows pc on the same UPS but that's still within the lan...).

     

    Oh well, I'm not super worried but I am going to look into this more. It's possible something is configured incorrectly, or there is a bad connection somewhere.

Packet loss isn't a major problem as long as it's not excessive, but if you really want to diagnose it you can check out https://kb.meraki.com/knowledge_base/troubleshooting-packet-loss

     

    On WAN connections (out to the internet), packet loss is going to happen.  Nothing you can do about it.  The internet is way too convoluted to expect 100% perfect transmission of 100% of packets 100% of the time.

     

    run ping google.com for a couple of hours and you'll see the odd packet dropped.

     

If LAN packets are being dropped, however, the cause is most likely either a bad/flaky port on the switch/router or a cabling issue. Ping another computer in your house to see.

     

Cabling you may or may not be able to do anything about. It could be as simple as a bad termination, a kink in the wire, tie straps that are too tight (changing the electrical characteristics; in an ideal world, tie straps are never used on data cables), or running too close to power wires.

     

    Or, lightning could have induced a momentary RF surge and scrambled a packet or two.

     

Net result: if it's not an issue, don't worry about it. I would far rather have a dropped packet than a corrupted packet that was received and not recognized.

     

My ifconfig is currently reporting 4315 dropped packets out of a total of 44707839, for a total loss of 0.00965%, and the computer is constantly downloading. My other server, which only contacts the internet for plugin updates, etc., reports a loss of 0.00011%.

     

The OP's percentage is 6514 / 110009239 = 0.0059%. Hardly earth-shattering.
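The percentages above are just drops divided by received packets; a small sketch that reads the same counters from /sys (assuming a Linux box and interface eth0):

```shell
# Drop ratio for eth0 using the kernel's per-interface statistics counters.
iface=eth0
rx=$(cat /sys/class/net/$iface/statistics/rx_packets)
drops=$(cat /sys/class/net/$iface/statistics/rx_dropped)
awk -v r="$rx" -v d="$drops" 'BEGIN { printf "%.2f%%\n", 100 * d / r }'
```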

     

For comparison, for VOIP applications, packet loss up to 20% is acceptable.

     

    Fair enough...

     

I might delve into this a bit more later; right now I'm showing Received: 992427 Drops: 59817, which is 6.02%... slightly higher than I expected... I guess it's not a huge concern. When I pinged Google 20 times I got 0% loss though... I suppose that 20 isn't really a great sample... but I digress.

  10. I also just noticed that I have a lot of dropped packets listed for eth0 receive, is there a way to troubleshoot why or how these packets are getting dropped?

     

    I'm only running plex when it comes to plugins.

  11. In KVM mode, there might be a cpu scaling driver issue with certain hardware combinations.  One of those drivers is called Intel-Pstate.  This is the chosen driver if your Intel cpu is Sandy Bridge (2011) or newer.  On my Haswell-class cpu (i7-4771) the Intel-Pstate driver is too sensitive and seems to keep the cpu frequency near the max frequency even when idle but occasionally it does scale the frequency down.

     

You can disable the Intel-Pstate driver by editing your /boot/syslinux/syslinux.cfg and adding an intel_pstate=disable parameter to the append line below:

     

    ...

    label unRAID OS

      menu default

      kernel /bzimage

      append intel_pstate=disable initrd=/bzroot

    ...

     

     

Save the file, stop the array, and reboot unRAID. Doing this on my Haswell machine caused it to use the acpi-cpufreq scaling driver instead of the intel_pstate one. It scales the frequency down like a rockstar now, usually keeping it around 800MHz - 1000MHz at idle.

     

On the flip side, my other test machine with a year-older Intel cpu (i5-3470) was able to scale down to 1600MHz (its minimum) pretty consistently when using the intel_pstate driver... but when I disabled intel_pstate there wasn't a scaling driver available at all. For some reason the acpi-cpufreq driver wasn't compatible with this cpu. Your mileage may vary.
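To confirm which scaling driver each core actually ended up with after the change, you can read the cpufreq scaling_driver nodes (a sketch; no output for a core means no driver is bound, as in the i5-3470 case above):

```shell
# Print "cpuN: <driver>" for every core that has a cpufreq node.
base=/sys/devices/system/cpu
for f in "$base"/cpu[0-9]*/cpufreq/scaling_driver; do
  [ -r "$f" ] || continue
  cpu=$(basename "$(dirname "$(dirname "$f")")")
  echo "$cpu: $(cat "$f")"
done
```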

     

    Give this a try and let me know if it helps you.  Either way, if it helped or not, let me know which cpu you tried with this command:

    grep -m 1 'model name' < /proc/cpuinfo

     

    This seems to be working! Thanks for the help!

    grep -m 1 'model name' < /proc/cpuinfo
    

    model name      : Intel® Core i3-4130T CPU @ 2.90GHz

     

  12. Give this a try and let me know if it helps you.  Either way, if it helped or not, let me know which cpu you tried with this command:

     

I am running in plain unRAID mode and changing the driver does work for me, using an Intel Core i3-4130T processor.

     

    It used to stay at max frequency all the time (I could see it drop to lower frequencies for very short periods), now it nicely scales down as it used to do.

     

    I also have an i3-4130T so I figure this fix will solve the issue for me as well. Will report back later when I can try this out.

     

    Is the driver issue something which can/will be fixed in the next update? I don't mind implementing this quick fix, but I'd love it if I didn't have to.

  13. Hmmm

     

Having upgraded to b12, running

    awk '/^cpu MHz/ {print $4*1" MHz"}' /proc/cpuinfo

    or

    cat /proc/cpuinfo |egrep -i mhz

both show all of my cpus pretty much pegged at 3400 MHz.

     

    Which tallies with what Dynamix gui is showing. Dynamix is also showing cpu utilization in the low single digits (currently hovering between 1 and 4%).

     

    Doesn't look like my cpus are stepping down. Am booted with the default KVM option and

    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    

    shows powersave for all cores.

     

    I would like to figure this out as well.  But if I'm not mistaken I read what eschultz wrote a bit differently, and I believe that

    awk '/^cpu MHz/ {print $4*1" MHz"}' /proc/cpuinfo

    is how the webgui is pulling the information (meaning it would also be wrong.)

     

Assuming that's what eschultz meant, I've yet to see the right way to check the cpu frequency under KVM, so if anyone could provide that code I'd be grateful.
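One possibility worth trying (my own assumption, not something eschultz confirmed): read cpufreq's scaling_cur_freq node directly instead of /proc/cpuinfo. It reports kHz, so divide by 1000 for MHz:

```shell
# Per-core frequency straight from cpufreq (kHz), converted to MHz.
for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq; do
  [ -r "$f" ] && awk '{printf "%s MHz\n", $1 / 1000}' "$f"
done
```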
