Everything posted by WeeboTech

  1. I'm not sure what's going on with the environment. rsync works fine for me, but yes, it copies the file and removes the source before it's been re-verified. In this case it's already been verified, though; i.e. moving from /mnt/disk2/temp to /mnt/disk2 shouldn't require verifying it again. In any case, I thought it odd that mv was not preserving extended attributes. I remember seeing in the source code that it did, and in my most recent test with mv, extended attributes are preserved:

     root@unRAIDb:/mnt/disk2# pwd
     /mnt/disk2
     declare -a MD5=$(md5sum folder.hash)
     setfattr -n user.hash -v ${MD5[0]} folder.hash
     root@unRAIDb:/mnt/disk2# getfattr -d folder.hash
     # file: folder.hash
     user.hash="65b69d37c3d3f8cccce56a6f4ac7d49a"
     mkdir tmp
     mv folder.hash tmp
     cd tmp
     root@unRAIDb:/mnt/disk2/tmp# pwd
     /mnt/disk2/tmp
     root@unRAIDb:/mnt/disk2/tmp# getfattr -d folder.hash
     # file: folder.hash
     user.hash="65b69d37c3d3f8cccce56a6f4ac7d49a"
     root@unRAIDb:/mnt/disk2/tmp# mkdir /mnt/disk1/tmp
     root@unRAIDb:/mnt/disk2/tmp# mv folder.hash /mnt/disk1/tmp
     root@unRAIDb:/mnt/disk2/tmp# getfattr -d /mnt/disk1/tmp/folder.hash
     getfattr: Removing leading '/' from absolute path names
     # file: mnt/disk1/tmp/folder.hash
     user.hash="65b69d37c3d3f8cccce56a6f4ac7d49a"

     I wouldn't use rsync to move from /mnt/disk2/temp to /mnt/disk2; however, your statement said "something along those lines". After re-verifying, I would use mv. When going disk to disk, or disk to disk on another system, I would use rsync. Usually what I do is:

     rsync -avPX source rsync://host/path

     After that is done, I do it again with -rc instead of -a:

     rsync -rcvPX source rsync://host/path

     This does a checksum comparison the second time around instead of mtime/size. After that, I'll do a third pass with --remove-source-files:

     rsync --remove-source-files -rcvPX source rsync://host/path

     You can probably eliminate the second step if you do the hash verification from bunker. This is just how I do it, though. mv should be working for you. I would double check that the drives are mounted with extended attributes enabled.
  2. I've suggested the ability to export files to an md5sums-style folder.hash file per directory for exactly the reason described. I believe the current export file can be run through sed and cut to create one, but not on a per-directory basis; i.e. it may traverse from a provided root down the tree, but I'm not sure it will use relative paths in the file without some work.
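In the meantime, something like this can fake it per directory (a rough sketch, assuming simple file names without spaces; make_folder_hashes is just an illustrative name):

```shell
#!/bin/bash
# Sketch: write an md5sum-style folder.hash into every directory under a
# root, hashing only that directory's own regular files with relative
# paths. Assumes file names without embedded whitespace.
make_folder_hashes() {
    root="$1"
    find "$root" -type d | while read -r dir; do
        (
            cd "$dir" || exit
            # only regular files directly in this directory, skip the
            # folder.hash file itself
            files=$(find . -maxdepth 1 -type f ! -name folder.hash)
            [ -n "$files" ] && md5sum $files > folder.hash
        )
    done
}
```

Each folder.hash can then be checked in place with `cd dir && md5sum -c folder.hash`.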
  3. The problem is in using mv and not rsync for the second restore. Consider the following instead of mv:

     rsync --remove-source-files -avPX source dest
  4. badblocks should already be available in /sbin:

     -rwxr-xr-x 1 root root 19256 2010-04-30 03:18 /sbin/badblocks*
  5. I rsync my flash periodically as there are some configuration files, plugins, extras, etc, etc. In addition it has the superblock which is the layout of the array itself. Keep in mind it might be out of date from when you've last made a change to the array, so you'll want to do the rsync/backup anytime you change the array.
  6. You don't have to have a second backup server. It can be another disk that is mounted on demand in the script: rsync from one location to the other with --link-dest and a dated directory. It can be a USB-mounted disk as well, but you'll want it to be a USB 3 or eSATA disk for speed. As far as internet access goes: yes, you might have to set up ssh with a tunnel in order to encrypt the data and provide remote access to the other server. Using ssh and port tunneling you can even proxy the webGui over the ssh tunnel, but that's a more involved process for another thread. You might be better off setting up a VPN.
  7. No, I've had the same concerns as you. However, these particular drives are probably not helium filled.
  8. Are you using the Joe L Preclear method or the badblocks multi pattern method for burn in?
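For reference, the badblocks multi-pattern burn-in boils down to one destructive write pass over the whole drive (it writes and verifies the patterns 0xaa, 0x55, 0xff, 0x00). A sketch with a guard, since -w erases everything; the device name is a placeholder:

```shell
#!/bin/bash
# Sketch of a destructive badblocks multi-pattern burn-in. THIS ERASES
# THE DRIVE, so it requires an explicit "yes" as the second argument.
burnin() {
    dev="$1"
    confirm="$2"
    if [ "$confirm" != "yes" ]; then
        echo "usage: burnin /dev/sdX yes   (DESTROYS ALL DATA on the device)"
        return 1
    fi
    # -w: destructive write test (patterns 0xaa, 0x55, 0xff, 0x00)
    # -s: show progress, -v: verbose, -b 4096: 4K block size
    badblocks -wsv -b 4096 "$dev"
}
```

Only run this on a drive with nothing on it, before it goes into the array.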
  9. http://agapovahardware.en.ec21.com/Hard_Drive_Western_Digital_3Tb--6776427_6776446.html
  10. The plot thickens. So far I'm leaning towards HGST. A few more posts and we'll have a better picture.
  11. For comparison purposes, could someone check some WD drive models for conveyance test support?

      === START OF INFORMATION SECTION ===
      Device Model: HGST HDN726060ALE610
      No Conveyance Self-test supported.

      === START OF INFORMATION SECTION ===
      Device Model: ST6000DX000-1H217Z
      Conveyance Self-test supported.
  12. With rsync, files are usually compared by size and modification time, so silent corruption may not cause the files to be rsynced to the backup. There's another strategy with the -c option, which does a checksum compare rather than size/time. That would surely propagate the silent corruption. As for accidental-deletion protection: for crucial files that need to be kept over a time period, or 'deltas', there is the --link-dest option. This can be done locally disk to disk, or it can be done remotely if the remote server 'pulls' the files. With this option you link the current destination directory to a new name: all files are recursively hard-linked from the old directory to the new directory as the rsync executes. The rsync runs from the source to the new name. If there are any changes, they are copied over; if there are no changes at all, the new directory looks exactly like the old directory. This has the benefit of using 1x the space for the whole tree, plus the space required for each file that changed. I've used this to mirror source trees and carry deltas. It can be done daily, hourly, monthly, whatever is chosen. You do this by managing the source directory name and destination directory name on the backup volume. This article has a good description of the process: http://goodcode.io/blog/easy-backups-using-rsync/
  13. Not sure why you think that. 24TB isn't really all that much these days ... on 4 6TB drives. A small 2nd UnRAID server with that much (or more) capacity; and it takes ~5 minutes of "your time" to back up your entire collection. [Clearly the copy would take a few days across the network, but that doesn't require any action on your part except to initiate the copy.] He's talking about backing up onto actual DVDs. That would take forever. Beyond that, it's actually cheaper to back it up onto other hard drives. Duh, thanks for the clarification!
  14. Not sure why you think that. 24TB isn't really all that much these days ... on 4 6TB drives. A small 2nd UnRAID server with that much (or more) capacity; and it takes ~5 minutes of "your time" to back up your entire collection. [Clearly the copy would take a few days across the network, but that doesn't require any action on your part except to initiate the copy.] I tend to agree here. You can trigger an rsync over the "local" network and just let it fly, checking on it every day. You can calculate how long it takes with a best case of around 90MB/s. With the right tuning of the kernel and rsyncd.conf files, along with the proper command line, this will be faster and easier than realized. The trick is to tune the kernel for maximum buffering on the writes and open the TCP buffers/windows with the sockopts settings. I've moved 4TB overnight with ease. With rsync you can stop and restart the transfer and it will only rsync the files that have changed.
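The sort of tuning I mean looks something like this (the values are illustrative starting points, not recommendations; measure on your own network):

```
# Kernel TCP buffer ceilings (sysctl):
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608

# rsyncd.conf on the receiving side:
socket options = SO_SNDBUF=8388608,SO_RCVBUF=8388608

# or equivalently on the client command line:
#   rsync --sockopts=SO_SNDBUF=8388608,SO_RCVBUF=8388608 ...
```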
  15. I'm sure there's a way to do it in the rc.S file: check if UNRAID exists and issue the fsck command. However, we may want to consider doing it via a /proc/cmdline option; i.e. if FSCKUNRAID exists in /proc/cmdline, do the fsck on the UNRAID boot device. Another idea is to check for a FSCKUNRAID flag as /boot/FSCKUNRAID, then umount the device, fsck it, then remount it and remove the flag. Are there tell-tale signs in /var/log/syslog that can be tested with grep for this condition? If so, that could be used for an automatic unmount, fsck, and remount. I'm just not sure how safe it would be to automate this rather than having a /proc/cmdline option from the syslinux.cfg.
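Roughly what I'm picturing for the /proc/cmdline approach. A sketch only: the flag name and device label are examples from the idea above, the real fsck is left commented out, and the cmdline file is a parameter so the logic can be exercised without a reboot:

```shell
#!/bin/bash
# Sketch: fsck the flash at boot only when a FSCKUNRAID flag appears on
# the kernel command line (set from syslinux.cfg). The cmdline file and
# device are parameters so this can be tested without rebooting.
fsck_if_flagged() {
    cmdline_file="${1:-/proc/cmdline}"
    dev="${2:-/dev/disk/by-label/UNRAID}"
    if grep -qw FSCKUNRAID "$cmdline_file"; then
        echo "FSCKUNRAID set: would fsck $dev"
        # umount "$dev" && fsck.vfat -a "$dev" && mount "$dev" /boot
        return 0
    fi
    return 1
}
```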
  16. Vents, fans, cables in the way? Blow out everything. If that doesn't change things, you can do the badblocks test in read-only mode and watch the drive temps over the course of the read, then make a determination. Garycase's statement regarding age and temperature rising makes sense to some degree. However, I had Seagate 7200 RPM SCSI drives that used to run so hot they would burn you, yet they never failed. They lasted years like that until I retired them. Drives today aren't the same as the old days. I would say when you hear the drive singing (that high-pitched whine), then it's time to consider an upcoming replacement. Another idea would be to turn off the spin-down timers and let the drives sit spinning without activity to gauge cooling capability vs. power vs. bearing heat vs. head motion. That is, there are a few factors that come into play here.
  17. Are they all doing this? Or only some of the drives? As I noted above, when temps start to run hotter than they used to, that's a good indication of pending failure. Most of them. They're all around 5 years old; the younger ones are maybe 4. This is why I'm migrating everything over to new 4TB HGSTs: preventative maintenance. The high temps only exhibit themselves during the parity checks. During normal serving of movies to the players, the temps are more reasonable. Could this be an issue with the machine's overall capacity to provide enough airflow? Perhaps a badblocks run in read-only mode would provide a model of temperature over the course of a scan.
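Something like this would chart temps over a read-only pass. A sketch only: reading the temperature from SMART attribute 194 is an assumption (the attribute layout varies by drive), and temp_survey is just an illustrative name:

```shell
#!/bin/bash
# Sketch: run a read-only badblocks pass in the background and log the
# drive temperature (SMART attribute 194, assumed) every 5 minutes to see
# how temps track a sustained full-surface read.
temp_survey() {
    dev="$1"
    if [ ! -b "$dev" ]; then
        echo "usage: temp_survey /dev/sdX" >&2
        return 1
    fi
    badblocks -sv "$dev" &        # read-only test with progress
    bb=$!
    while kill -0 "$bb" 2>/dev/null; do
        temp=$(smartctl -A "$dev" | awk '$1 == 194 {print $10}')
        echo "$(date +%H:%M) ${temp}C"
        sleep 300
    done
}
```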
  18. Is there data to show this as fact? I've had 10,000 RPM drives outlast 5400 RPM drives by many years. Each was running 24x7x365. While the 10,000 RPM drives were hot all the time, the 5400 RPM drives were not. Two identical drives, one spinning faster and the other slower: the one spinning faster should wear out faster. But I have no facts to back up the claim; it just seems like common sense. I do expect that 10,000 RPM drives are engineered differently than 5400 RPM drives, and therefore "all things" are not equal! I think the only real measurable factor is heat, and if each is within tolerance of rated levels, they last as long as each other. A 7200 RPM drive is designed to run at that speed, so I would expect that to be engineered differently as well, just as I think drives that are designed as NAS drives are engineered for specifics such as runtime, vibration, heat, etc. My point is, speed has not shown itself to be an issue in my usage if heat is managed well. I always purchase 7200 RPM drives if I know I'm going to bang on the machines, and lower speeds if they are mainly archival. In comparison, I've had many failures of low-usage WD EACS/EADS green 1TB drives vs. only 1 failure of a 7200 RPM Seagate 1TB drive and 1 failure of a Seagate 3TB 7200 RPM drive. EDIT: I should add, my experience doesn't really amount to a hill of beans. What matters is a larger amount of data to gather proper statistics, which is why I asked.
  19. Is there data to show this as fact? I've had 10,000 RPM drives outlast 5400 RPM drives by many years. Each was running 24x7x365. While the 10,000 RPM drives were hot all the time, the 5400 RPM drives were not.
  20. There are so many more important things for Eric to work on at the moment; this plugin is a quick stopgap to help get a few power tools installed for novices. If I remember correctly, this is temporary, to aid those who do not want to drop to the command line to install the packages. I probably would not do the removepkg commands, since this doesn't install them into /boot/extra anyway. Removing the plugin stops them from being installed at next reboot; that's probably enough for now.
  21. NerdPack wasn't my favorite naming, yet as long as it's easy to install and get the additional tools for novices I'll go with what works. Suggestions powertools commandpack slackextras slackpack any other ideas? a poll perhaps?
  22. For those unfamiliar with the kbd package: I suggested it so programs could be run on the console on a different tty. This is mostly useful if your machine is having difficulty and you have a monitor. In my case, I share one with the HTPC and switch between HDMI/VGA as needed. For me there is no difficulty, yet my machine has a monitor, so I utilize it. In my /boot/config/go file, the last line is:

      openvt -c9 -v -s /usr/bin/less +F /var/log/syslog

      This opens less in follow mode (like tail) on the /var/log/syslog file on tty9. So when the machine boots up, it switches to tty9 and starts monitoring /var/log/syslog. It's a cheesy way to monitor the syslog temporarily on a physical screen. The caveat of this approach is that it needs to be restarted every time /var/log/syslog is rotated. I haven't done anything about that yet, since it's been a proof of concept. I think it's mostly useful when your machine is on the bench or there is difficulty. There are better approaches, like having rsyslog write directly to /dev/tty12 instead. Since chvt is also included, we can make a drop-in config for rsyslog to write to /dev/tty12 and change to it. In the meantime, use of less in this context allows forward/reverse paging and following mode on the syslog (if you choose to enable it).
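The rsyslog drop-in I have in mind would be something like this (the path and filename are assumptions; check your /etc/rsyslog.conf for the include directive it actually uses):

```
# /etc/rsyslog.d/10-console.conf  (hypothetical drop-in)
# mirror all syslog messages to virtual terminal 12:
*.*    /dev/tty12
```

Then a `chvt 12` (from the kbd package) switches the physical console to that terminal.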
  23. My attributes are configurable on the command line. They 'default' to:

      user.hash.value=
      user.hash.time=

      if ( !opt.attrhashvaluename ) {
          sprintf(buf, "user.%s.value", opt.hashbase ? opt.hashbase : "hash");
          opt.attrhashvaluename = strdup(buf);
      }
      if ( !opt.attrhashtimename ) {
          sprintf(buf, "user.%s.time", opt.hashbase ? opt.hashbase : "hash");
          opt.attrhashtimename = strdup(buf);
      }

      If you use an external hasher, that hasher's basename is used in the attribute name. For example, with --hash-exec '/bin/md5sum -b {}':

      user.md5sum.value
      user.md5sum.time

      However, each can be overridden with --hash-name and --time-name. I use the epoch time in the time value, so it can be used in arithmetic or in date conversions with strftime.
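The same naming scheme is easy to mirror in shell for anyone scripting against the attributes (attr_names and set_hash_attrs are hypothetical helper names, not part of my tool):

```shell
#!/bin/bash
# Hypothetical shell mirror of the C naming logic above: derive the xattr
# names from an optional hasher basename, defaulting to "hash".
attr_names() {
    base="${1:-hash}"
    echo "user.${base}.value user.${base}.time"
}

# Store a hash and an epoch timestamp under those names. Requires xattr
# support on the filesystem; md5sum is used as the example hasher here.
set_hash_attrs() {
    file="$1"
    base="${2:-md5sum}"
    sum=$(md5sum "$file" | awk '{print $1}')
    setfattr -n "user.${base}.value" -v "$sum" "$file"
    setfattr -n "user.${base}.time"  -v "$(date +%s)" "$file"
}
```

So `attr_names` yields "user.hash.value user.hash.time", while `attr_names md5sum` yields the md5sum-based names shown above.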