WeeboTech

Everything posted by WeeboTech

  1. I believe it's based on the devices attached at startup, no matter how they are assigned.
  2. That drive is not usable in unRAID. It's a ticking time bomb for you to lose data.

       5 Reallocated_Sector_Ct    0x0033   081 081 036   Pre-fail   Always   -   25328
     187 Reported_Uncorrect       0x0032   001 001 000   Old_age    Always   -   3238
     197 Current_Pending_Sector   0x0012   100 100 000   Old_age    Always   -   8
     198 Offline_Uncorrectable    0x0010   100 100 000   Old_age    Offline  -   8
     ...
     SMART Self-test log structure revision number 1
     Num  Test_Description   Status                    Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Extended offline   Completed: read failure   90%        31583            4028104768
     # 2  Extended offline   Completed: read failure   90%        31583            4028104768

     What this drive is good for is testing unRAID's SMART monitoring. heh. I wouldn't put that even NEAR my production array.
  3. I'm good, I have my .c programs. I only wanted to add some ideas if you were looking to expand further. I think the corz compatibility is/was a great idea, and it's something I've been proposing to the other authors. I'll probably borrow some of the plugin code to see how I can do the same. I have millions of files with all sorts of file names, so I need to do it in .c to avoid quoting issues. I learned that linking to the OpenSSL libraries provides the fastest md5 implementation I could find. I wouldn't be surprised if PHP uses them. Along with a compiled walk through the file system, it's as fast as it can possibly be.
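     As a rough illustration of the speed difference, you can compare the coreutils and OpenSSL md5 implementations from the shell (just a sketch; bigfile.bin is a hypothetical test file, and the second run benefits from the file already being in the buffer cache):

        # compare md5 implementations on a large, already-cached file
        time md5sum bigfile.bin          # coreutils implementation
        time openssl md5 bigfile.bin     # OpenSSL's libcrypto implementation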
  4. I've actually been waiting for someone to ask about that. I was going to build in compatibility to read in the extended attributes, to avoid having to rehash all of the existing files that had already been done with bunker, but I looked at the number of total downloads of it (at the time, just prior to publishing this plugin, it was a whole 7 downloads) and decided that it wasn't worth the huge amount of debugging. (Because I'm directly dealing with users' data here, my debugging is rather extensive to make sure that there's no way I can inadvertently corrupt data -> to the point that every time I write a hash file, the plugin checks to make sure the file name is correct, and if it's not correct it immediately throws up an alert and completely stops the plugin from doing anything and everything until a reboot happens.) I figured that if anyone ever brought it up and supplied me with a sample exported file, I'd just create a script to create the .hash files from it.

     It's probably not worth your effort. I have a tool; it still needs a few options added and then to be compiled for 64-bit. I've been wasting so much time on the conversion to ESX6 and unRAID6, with the ESX USB reset problem, that I have not finished. This is the help screen so far:

     root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattrexport --help
     Usage: %s [OPTION]... PATTERN [PATTERN]...
     Export hash extended file attributes on each FILE or DIR recursively
     PATTERN is globbed by the shell, Directories are processed recursively

     Filter/Name selection and interpretation:
     Filter rules are processed like find -name using fnmatch
       -n, --name                 Filter by name (multiples allowed)
       -f, --filter               Filter from file (One filter file only for now)
       -X, --one-file-system      Don't cross filesystem boundaries
       -l, --maxdepth <levels>    Descend at most <levels> of directories below command line
       -C, --chdir <directory>    chdir to this directory before operating
       -r, --relative             Attempt to build relative path from provided files/dirs
                                  A second -r uses realpath() which resolves to full path
       -S, --stats                Print statistics
       -P, --progress             Print statistic progress every <seconds>
       -Z, --report-missing       Print filenames missing an extended hash attribute
       -M, --report-modified      Print filenames modified after extended hash attribute
       -0, --null                 Terminate filename lines with NULL instead of default \n
       -R, --report               Report status OK,FAILED,MODIFIED,MISSING_XATTR,UPDATED
       -q, --quiet                Quiet/Less output; use multiple -q's to make it quieter
       -v, --verbose              Increment verbosity
       -h, --help                 Help display
       -V, --version              Print Version
       -d, --debug                Increment debug level

     It works like this:

     root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# getfattr -d strlib.*
     # file: strlib.c
     user.hash.time="1419424415"
     user.hash.value="67eef48f1199c68381127baded05f051"
     # file: strlib.h
     user.hash.time="1419424415"
     user.hash.value="214635e26ea28ccb3cb18b9b3d484248"
     # file: strlib.o
     user.hash.time="1419424415"
     user.hash.value="1d258b90f70e787b5560897fcb125e1b"

     root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattr -r strlib.*
     67eef48f1199c68381127baded05f051  strlib.c
     214635e26ea28ccb3cb18b9b3d484248  strlib.h
     1d258b90f70e787b5560897fcb125e1b  strlib.o

     root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattr -r strlib.* | md5sum -c
     strlib.c: OK
     strlib.h: OK
     strlib.o: OK

     It's like doing a find down a tree | grep | some filter to convert the output of getfattr | md5sum -c. What I have to perfect is writing an individual folder.hash per directory and/or doing the whole /mnt/somearchivefolder/hashed_directory_name.hash as previously mentioned.
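     That same end-to-end check can be sketched with stock tools, without the custom binary (a rough sketch, assuming the user.hash.value attribute shown above; filenames with embedded newlines would need the -0 style handling instead):

        # walk a tree, pull each file's stored md5 out of its xattrs,
        # and feed the result straight to md5sum -c
        find . -type f | while IFS= read -r f; do
            h=$(getfattr --only-values -n user.hash.value "$f" 2>/dev/null) || continue
            printf '%s  %s\n' "$h" "$f"
        done | md5sum -c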
  5. folder.par2 is where this plugin is going to rock and save the day for some people.
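     For anyone unfamiliar with par2, the basic workflow looks roughly like this (a sketch; assumes the par2cmdline tool is installed):

        # create recovery data for everything in the current directory,
        # then later detect corruption and repair small errors in place
        par2 create -r10 folder.par2 *    # ~10% redundancy
        par2 verify folder.par2
        par2 repair folder.par2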
  6. The purpose of using the metadata/extended attribute is to store the hash with the file, so it won't matter if you are on the disk or the user share. No database needed, and the user does not need to know. At that point, exporting it to a local folder.hash, or to a remote/alternate location as md5hashname.hash with the embedded path, provides what is needed: a centrally/easily managed md5 from the attribute (that the user never needs to know about), and an exported folder.hash within the directory, or in an alternate location with linkage back to the source. The downside of having the original folder.hash within the directory is when you have corruption. Building the md5hashname.hash in a central location and using a symlink safeguards the hash file and still allows the symlink to exist in the folder (for corz, or export). Both bitrot and bunker have export formats, but not one that is compatible with corz. I could do this fairly easily, however I don't really have the time to do it, or it would have been done by now. LOL! Maybe I'll get adventurous this week. I'll still end up using my gdbmsum program, since its side effect lets me log all changes, cache the directories, and update the hash in place.
  7. And s'more food for thought on storing the path in the folder.hash file. This grabs the md5 into a bash array so it can be utilized further:

     root@unRAID:/mnt/disk3/filedb# declare -a MD5=( $(md5sum disk3.md5sum.gdbm) )
     root@unRAID:/mnt/disk3/filedb# echo $MD5
     16fea9e414edd86ef4c63356951d4378

     This turns the full path into a unique hash value:

     root@unRAID:/mnt/disk3/filedb# declare -a MD5PATH=( $(echo -e $PWD\c | md5sum) )
     root@unRAID:/mnt/disk3/filedb# echo $MD5PATH
     177b1c5dba856f67850883f7b265fe9b
     root@unRAID:/mnt/disk3/filedb# echo "# path: $PWD " > /tmp/${MD5PATH[0]}.hash
     root@unRAID:/mnt/disk3/filedb# cat /tmp/${MD5PATH[0]}.hash
     # path: /mnt/disk3/filedb
     root@unRAID:/mnt/disk3/filedb# set -x
     root@unRAID:/mnt/disk3/filedb# cat /tmp/${MD5PATH[0]}.hash
     + cat /tmp/177b1c5dba856f67850883f7b265fe9b.hash
     # path: /mnt/disk3/filedb

     This is used as an example to stuff the hash into an extended attribute. Its only purpose is to provide food for thought and an example of the export:

     root@unRAID:/mnt/disk3/filedb# setfattr -n user.hash -v $MD5 disk3.md5sum.gdbm
     root@unRAID:/mnt/disk3/filedb# getfattr -d disk3.md5sum.gdbm
     # file: disk3.md5sum.gdbm
     user.hash="16fea9e414edd86ef4c63356951d4378"

     Example of the md5-named folder.hash, with the path stored inside it:

     root@unRAID:/mnt/disk3/filedb# cat /tmp/177b1c5dba856f67850883f7b265fe9b.hash
     # path: /mnt/disk3/filedb
     16fea9e414edd86ef4c63356951d4378  disk3.md5sum.gdbm
     root@unRAID:/mnt/disk3/filedb# pwd
     /mnt/disk3/filedb
     root@unRAID:/mnt/disk3/filedb# md5sum -c /tmp/177b1c5dba856f67850883f7b265fe9b.hash
     disk3.md5sum.gdbm: OK
  8. But thinking further about it, storing hashes in a separate folder introduces another problem. At that point, you are pretty much stuck with using absolute paths within the hash files (e.g. /mnt/user/movieshare/movieA/moviefile.mkv). So now you can't easily verify the files if/when you copy them to a removable device to share with a buddy. You can't move a file from one folder to another without having to recalculate the hash files. You can't easily (but not impossibly) do disk verifications, etc. And to present more food for thought... another idea for keeping the folder.hash file out of the directory: if you create an md5 of the full path of the folder.hash file, it can be used as the filename stored in some directory database. Then, in the source directory, create a symlink to the md5-named folder.hash, i.e. /mnt/disk3/movies/somemovie/folder.hash -> /mnt/cache/filedb/786f8e4beaa1bbb0577ae0cd3638ecd6.hash. You can even store the path as a comment inside 786f8e4beaa1bbb0577ae0cd3638ecd6.hash if need be. Unfortunately, this still doesn't get by changing the mtime of /mnt/disk3/movies/somemovie at least once. But it does let you store the folder.hash elsewhere in case of corruption, and copy the folder.hash into the directory when backing up. The downside would be a lot of files in one large directory, unless you prefixed it somehow by share name or disk name. I originally thought of this idea because people had stated they did not want a bunch of files littered around their file system. My future goal is to do this with a folder.par2 for verification and/or reconstruction of a corrupt file. folder.hash is great to know something is wrong, but you'll have to go to a backup; with a folder.par2, you can detect and fix small errors in place.
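     A minimal sketch of that idea (the paths are hypothetical, and /mnt/cache/filedb is assumed to already exist):

        # name the central hash file after the md5 of its would-be path,
        # store the source path as a comment inside it,
        # then symlink it back into the source directory
        DIR=/mnt/disk3/movies/somemovie
        NAME=$(printf '%s' "$DIR/folder.hash" | md5sum | awk '{print $1}')
        { echo "# path: $DIR"; ( cd "$DIR" && md5sum -- * ); } > "/mnt/cache/filedb/$NAME.hash"
        ln -s "/mnt/cache/filedb/$NAME.hash" "$DIR/folder.hash"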
  9. But thinking further about it, storing hashes in a separate folder introduces another problem. At that point, you are pretty much stuck with using absolute paths within the hash files (e.g. /mnt/user/movieshare/movieA/moviefile.mkv). So now you can't easily verify the files if/when you copy them to a removable device to share with a buddy. You can't move a file from one folder to another without having to recalculate the hash files. You can't easily (but not impossibly) do disk verifications, etc. Just some food for thought.

     An idea might be to store the hash and hash/verify time into the extended attributes like bunker and bitrot do, then export them into the current folder.hash or wherever the user chooses, or potentially use a central location.

     I'm using a .c program to scan the whole filesystem and store data into a .gdbm. I do this because it's extremely fast once the file system is cached. It double-duties as a dircache clone (and a file list caching utility). I scan the file system with my gdbmsum, keeping stat blocks in RAM and matching against the stat block stored in the gdbm. For most purposes, mtime and size are all that's needed; I store the whole binary stat block because I'm lazy, and a binary match of stat struct to stat struct is fast. When the stat block changes, I calculate the md5, since chances are the file is already in the buffer cache. (I'm still debating whether to do it immediately, wait until mtime > some age, do it in a batch overnight, and/or double check if the file is open.) From there I can export the data as one large md5sum file, or pipe it to grep and/or md5sum -c. What I plan to add is storage into the extended attributes, and then some other tool to traverse the directories and pull out the data to create the folder.hash files.

     In my particular case, I'm using the GDBM format as it's extremely fast. I can traverse a filesystem containing over 250,000 files in 2 seconds, with lookups of each file into the GDBM, i.e.:

     276469 items processed in 2 seconds, 256582 fetches, 256582 existing, 0 non-existing, 0 stores, 0 deletes, 0 errors
     sync completed in 0 seconds
     operation completed in 2 seconds

     This allows me to dir/inode cache the filesystem and also cache the stat block in the GDBM file for changes, then append the md5 inline or via some other batch operation.

     Another example:

     131990 items processed in 1 seconds, 125690 fetches, 125690 existing, 0 non-existing, 0 stores, 0 deletes, 0 errors
     134716 items processed in 1 seconds, 128316 fetches, 128316 existing, 0 non-existing, 0 stores, 0 deletes, 0 errors
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/11 Electric Rock Reef - Summer Prayer (A B C - Elements and Shadows Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/05 ZigZag to Paradies - Speed Down (Remixed in Budapest Hotel Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/13 Lover Banks - Hideaway (Cassette Sunset Ibiza Retro Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/04 Unchained Bars - Sleepless Eyes (Beyond the Beach Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/07 Bingo Bus - Passenger of a Dream (Sky Full Of Dance Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/09 Non-Stop Listening - Nature Is Calling (Sexy Summer Session Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/01 Chimichanka - Whisper of the Ocean (Never Felt So Good Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/02 Backyard Players - Light the Darkness (Dance to the Limit Version).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/03 Ocean View Suite - Down by the Pier (Sigma Sound Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/14 Timejumpers - Follow Rivers (Breeze of Chill Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/folder.jpg
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/16 Mazed Emotions - Everybody Can Be Free (Turn This Beat Around Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/12 Moderate Jungle - The Sun (Shatter Me at Midnight Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/15 Glitch and Wet - Salt on My Skin (Jealous No More Cut).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/06 Waterfront Lounge - Stones Against Water (Ibiza Mix).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/08 Cliffside Chiller - Sunny Beach Days (Beat Drops Out Edit).mp3
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/10 Ambient Therapy - Hear a Whisper (Boys Relaxing At the Disco Cut).mp3
     137044 items processed in 3 seconds, 130561 fetches, 130544 existing, 0 non-existing, 17 stores, 0 deletes, 0 errors
     ....
     276487 items processed in 3 seconds, 256599 fetches, 256582 existing, 0 non-existing, 17 stores, 0 deletes, 0 errors
     sync completed in 1 seconds
     operation completed in 4 seconds

     I'm not suggesting any changes, just presenting some food for thought. It might be worthwhile to consider a central DB/GDBM, then export folder.hash files after processing. The downside of gdbm is concurrency: only 1 writer is allowed (multiple readers are allowed), which is why I had been working on an SQLite variant. But that comes at the cost of speed and size.

     For me, my needs are much larger. I have so many files that I need to export a file list nightly in case I want to search for something. For one of my disks, it takes over an hour just to walk the file tree of 750,000 files. The gdbm can be exported easily into a centralized file list. For 250,000 files the overhead is:

     -rw-rw-r-- 1 root root 106398151 2015-11-04 17:08 /mnt/disk3/filedb/disk3.md5sum.gdbm

     with 133 bytes + stat struct + time() as a record. Exporting this file format is very fast, as in:

     root@unRAID:/home/rcotrone/src.slacky/gdbmsum-work# time ./gdbmsum /mnt/disk3/filedb/disk3.md5sum.gdbm | wc -l
     256620
     real 0m0.855s
     user 0m0.650s
     sys 0m0.310s

     # time ./gdbmsum /mnt/disk3/filedb/disk3.md5sum.gdbm | grep 'Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds' | md5sum -c
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/02 Backyard Players - Light the Darkness (Dance to the Limit Version).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/folder.jpg: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/05 ZigZag to Paradies - Speed Down (Remixed in Budapest Hotel Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/10 Ambient Therapy - Hear a Whisper (Boys Relaxing At the Disco Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/07 Bingo Bus - Passenger of a Dream (Sky Full Of Dance Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/14 Timejumpers - Follow Rivers (Breeze of Chill Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/09 Non-Stop Listening - Nature Is Calling (Sexy Summer Session Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/13 Lover Banks - Hideaway (Cassette Sunset Ibiza Retro Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/04 Unchained Bars - Sleepless Eyes (Beyond the Beach Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/06 Waterfront Lounge - Stones Against Water (Ibiza Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/01 Chimichanka - Whisper of the Ocean (Never Felt So Good Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/08 Cliffside Chiller - Sunny Beach Days (Beat Drops Out Edit).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/03 Ocean View Suite - Down by the Pier (Sigma Sound Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/11 Electric Rock Reef - Summer Prayer (A B C - Elements and Shadows Mix).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/16 Mazed Emotions - Everybody Can Be Free (Turn This Beat Around Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/15 Glitch and Wet - Salt on My Skin (Jealous No More Cut).mp3: OK
     /mnt/disk3/Music/music.mp3/Chill/Various Artists/Taste of Summer Del Mar (Cool and Smooth Chill and Lounge Sounds - Deluxe Selection for Easy Listening and Relax)/12 Moderate Jungle - The Sun (Shatter Me at Midnight Mix).mp3: OK
     real 0m1.386s
     user 0m2.080s
     sys 0m0.250s

     Just some ideas you may want to explore.
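     The core "rehash only when the stat block changes" idea can be sketched with stock tools, trading GDBM for a sorted text cache (a rough sketch only; the cache path is hypothetical, and filenames containing tabs or newlines aren't handled):

        # hash only files whose mtime or size changed since the last run
        CACHE=/mnt/disk3/filedb/statcache.txt      # touch it before the first run
        find /mnt/disk3 -type f -printf '%T@ %s\t%p\n' | sort > /tmp/stat.new
        # lines only in the new list = new files or changed stat blocks
        comm -13 "$CACHE" /tmp/stat.new | cut -f2- |
        while IFS= read -r f; do md5sum "$f"; done >> /mnt/disk3/filedb/changed.md5
        mv /tmp/stat.new "$CACHE"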
  10. What model and size did you have? How old was it? I can tell that my USB stick is constantly being accessed; however, I have no proof that it's being written to.
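     One way to get proof either way (a sketch, assuming the inotify-tools package is available and the flash drive is mounted at /boot):

        # log every write-like event on the flash drive as it happens
        inotifywait -m -r -e modify,create,delete,move /boot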
  11. Very cool idea! Thanks. I don't really see the need to be forced to always run full checksums. Myself, I'd rather run them a bit at a time every week or so over the course of 6 months or a year than all at once, and I think that running a full verification once a month is a bit of overkill even for the most paranoid among us. If files are moved or a drive has been rebuilt, someone will probably want to do a full verification. Personally, I intend to do a full verification per drive for files not verified within a certain number of weeks, but only as much as can be done within a 6-hour window or until a certain clock time, i.e. start at midnight and verify as much as can be done until 6am, when I start working on the server. When saying oldest files first, is that based on file modification time or the last verification time?
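     The midnight-to-6am window could be scripted roughly like this (a sketch; assumes per-directory folder.hash files, and sorts by the hash file's mtime as a crude stand-in for last verification time):

        # verify hash files oldest-first, stopping at 06:00
        END=$(date -d '06:00' +%s)
        find /mnt/disk1 -name folder.hash -printf '%T@\t%p\n' | sort -n | cut -f2 |
        while IFS= read -r h; do
            [ "$(date +%s)" -ge "$END" ] && break
            ( cd "$(dirname "$h")" && md5sum -c folder.hash )
        done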
  12. Don't put 7200rpm drives in the Sans Digital. The Rosewill may be better, as it has metal drive holders which should help absorb and dissipate heat. The Rosewill also comes with a Silicon Image controller, which works very well with port multipliers.
  13. unRAID will support a port multiplier if the card/SATA interface does and the driver does. Silicon Image chipsets are among the fastest. The ASMedia works as well, but it's not as fast with simultaneous access. The Sans Digital case works, I have a few, but it can run a bit hot. I might 'try' the Rosewill case, since it can support 5 drives instead of 4, plus it comes with the Silicon Image controller.
  14. I boot from .vmdk. It makes no difference if using plop or .vmdk.
  15. Looking more carefully at your screenshot, your average speed is not that bad; 929GB in 2:44H is about 95MB/s average. If there was nothing using the array, the momentary slowdown can be a disk getting some slow sectors.

     I also experience slowdowns during parity checks. I just checked the performance of my disks, and one disk shows lower average speed; the graph shows low performance in the first 2 TBs of the disk. See attachment. Can this somewhat troublesome disk cause bad speeds?

     I would suggest attaching a smart report for disk 11. Something is going on with that disk and/or its interface (i.e., its controller or cabling).

     All my disks are in CSE-M35T drive cages. I already relocated this disk to another slot by swapping disks, so it is using another cable and channel on the controller. It is still showing the same result. Smart report is attached.

     The short test is not sufficient:

     SMART Self-test log structure revision number 1
     Num  Test_Description   Status                    Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Short offline      Completed without error   00%        24865            -

     The drive spin down timer needs to be disabled temporarily, and a long/extended test of the whole surface needs to be executed.

     196 Reallocated_Event_Count  0x0032   100 100 000   Old_age    Always   -   0
     197 Current_Pending_Sector   0x0022   100 100 000   Old_age    Always   -   0
       5 Reallocated_Sector_Ct    0x0033   100 100 005   Pre-fail   Always   -   0

     Nothing else seems to stand out. Another choice might be to do a badblocks test in read-only mode over the whole drive, which may or may not trigger any events for weak sectors. However, badblocks and kernel reads are retried, so the SMART long/extended test will reveal a problem earlier. The extended test will take approximately 8-9 hours to finish, as estimated by the SMART recommended polling time:

     Extended self-test routine recommended polling time: ( 492) minutes.
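     Roughly, that procedure from the command line (a sketch; /dev/sdX stands in for disk 11's device node, and unRAID's own per-disk spin-down delay should also be set to never while the test runs):

        # stop the drive's own standby timer, then start the full-surface self-test
        hdparm -S 0 /dev/sdX
        smartctl -t long /dev/sdX
        # ~8-9 hours later, review the self-test log and attributes
        smartctl -a /dev/sdX
        # optional second opinion: a read-only full-surface pass
        badblocks -sv /dev/sdX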
  16. I think conceptually it is N (array width) - 2 reads + 2 writes (data & parity). With a 4-drive array, it's still 4 I/O operations, but 2 are reads that happen in parallel while 2 are writes that happen in parallel and are cached. You are not reading and writing to the same drives, thus you skip the rotational delay.
  17. Thanks for filling in the info WeeboTech. How does one enable "turbo write" and what is meant by "small width array" and "wide array"?

     Small-width: 2-6 data drives. Wide: more than 6 data drives.

     Turbo-write enable/disable: I'm including a cron job for my main unRAID file server that is accessed all day long. In /etc/cron.d/md_write_method I have the following entries:

     30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
     30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
     #
     # * * * * * <command to be executed>
     # | | | | |
     # | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
     # | | | +------ Month of the Year (range: 1-12)
     # | | +-------- Day of the Month  (range: 1-31)
     # | +---------- Hour              (range: 0-23)
     # +------------ Minute            (range: 0-59)

     This turns on turbo-write at 8:30 when I start working, and off at 23:30 when I'm usually done. There is no webGui or other automated function right now. For unRAID 6, the path is /usr/local/sbin/mdcmd:

     ON:  /usr/local/sbin/mdcmd set md_write_method 1
     OFF: /usr/local/sbin/mdcmd set md_write_method 0

     There is a diminishing return on investment when reading/writing multiple drives with turbo-write. Reads on other drives can interfere with writes to the target drive. However, with minimal reads and a large write load, turbo-write can speed things up on the smaller arrays. I don't have an array larger than 6 drives to test it on, but with the smaller arrays I get good speed.
  18. OK, I see your point. On my system it takes around 20 seconds (after the drives spin up) to figure it all out before it starts actually checking the disk. I get it. We do things differently. I exert much more control over the layout of my data, utilizing user shares for read-only access. I never have a directory whose files span multiple disks; I always keep like files self-contained within a directory. With all my split points, they are only directories, and they rarely have files that require a checksum. With this software, if a user accesses a disk share via Windows and uses corz to verify a hash file, is it possible the hash file will contain references to files on other disks?
  19. While I have user shares, I almost always access and organize with disk shares. My own gdbm-managed md5sum DB files are all based on the disks. My sqlocate sqlite tables are also based on disks, as disks can migrate into and out of the array as well. Ideally, when a disk goes bad and you question its integrity, you'll be operating at the disk level, i.e. a failed disk, replacing/rebuilding, or migrating to other file systems. With hash files existing at a per-directory level, it actually does provide a valid check mechanism per disk. I have not used the Checksum Creator/Verify tool, but being able to validate a disk with a sweep of hash files contained only on that disk would be valuable after an event.
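     That per-disk sweep can be close to a one-liner (a sketch, assuming per-directory folder.hash files; -execdir runs the check from each hash file's own directory):

        # verify every folder.hash on one disk, each relative to its own directory
        find /mnt/disk3 -name folder.hash -execdir md5sum -c '{}' \;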
  20. Looking more carefully at your screenshot, your average speed is not that bad; 929GB in 2:44H is about 95MB/s average. If there was nothing using the array, the momentary slowdown can be a disk getting some slow sectors.

     I also experience slowdowns during parity checks. I just checked the performance of my disks, and one disk shows lower average speed; the graph shows low performance in the first 2 TBs of the disk. See attachment. Can this somewhat troublesome disk cause bad speeds?

     I would suggest attaching a smart report for disk 11. Something is going on with that disk and/or its interface (i.e., its controller or cabling). Turn off the spindown timer for that disk and issue a smart long/extended test, then review the report. If there are any pending sectors, the retries might slow things down for a short period. However, there may be other issues as well.
  21. Points missed: if you have a large drive with many small files, the time difference to walk it can be measurable. In addition, during bulk writes with turbo write enabled, a faster parity drive helps, since it's in write mode vs. read/write. On a small-width array, with the other data drives being read in parallel, writes can occur much faster during bulk loads. This may not be as useful with wide arrays, yet on small-width arrays it's useful and very noticeable. A cache negates this, yet in all this time I've not required a cache, since turbo write is so fast.
  22. You may need to run in safe mode for a while to see what might be causing the situation. I saw this in the diagnostic syslog:

     Oct 18 18:41:50 InfoSphere kernel: device vnet0 entered promiscuous mode
     Oct 18 18:41:50 InfoSphere kernel: VM: port 2(vnet0) entered listening state
     Oct 18 18:41:50 InfoSphere kernel: VM: port 2(vnet0) entered listening state
     Oct 18 18:41:54 InfoSphere kernel: kvm: zapping shadow pages for mmio generation wraparound
     Oct 18 18:42:05 InfoSphere kernel: VM: port 2(vnet0) entered learning state
     Oct 18 18:42:09 InfoSphere kernel: ------------[ cut here ]------------
     Oct 18 18:42:09 InfoSphere kernel: WARNING: CPU: 0 PID: 8344 at arch/x86/kernel/cpu/perf_event_intel_ds.c:315 reserve_ds_buffers+0x10e/0x347()
     Oct 18 18:42:09 InfoSphere kernel: alloc_bts_buffer: BTS buffer allocation failure
     Oct 18 18:42:09 InfoSphere kernel: Modules linked in: kvm_intel kvm vhost_net vhost macvtap macvlan md_mod xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables tun xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat hid_logitech_hidpp i2c_i801 r8169 mii ahci hid_logitech_dj libahci [last unloaded: md_mod]
     Oct 18 18:42:09 InfoSphere kernel: CPU: 0 PID: 8344 Comm: qemu-system-x86 Not tainted 4.1.7-unRAID #3
     Oct 18 18:42:09 InfoSphere kernel: Hardware name: BIOSTAR Group NM70I-1037U/NM70I-1037U, BIOS 4.6.5 06/05/2013
     Oct 18 18:42:09 InfoSphere kernel: 0000000000000009 ffff880008b9f858 ffffffff815eff9a 0000000000000000
     Oct 18 18:42:09 InfoSphere kernel: ffff880008b9f8a8 ffff880008b9f898 ffffffff810477cb ffff880008b9f888
     Oct 18 18:42:09 InfoSphere kernel: ffffffff8101fe63 0000000000000000 0000000000000000 0000000000010e10
     Oct 18 18:42:09 InfoSphere kernel: Call Trace:
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff815eff9a>] dump_stack+0x4c/0x6e
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff810477cb>] warn_slowpath_common+0x97/0xb1
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8101fe63>] ? reserve_ds_buffers+0x10e/0x347
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff81047826>] warn_slowpath_fmt+0x41/0x43
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8101fe63>] reserve_ds_buffers+0x10e/0x347
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8101ac34>] x86_reserve_hardware+0x141/0x153
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8101ac8a>] x86_pmu_event_init+0x44/0x240
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff810a7ad4>] perf_try_init_event+0x42/0x74
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff810ad190>] perf_init_event+0x9d/0xd4
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff810ad54c>] perf_event_alloc+0x385/0x4f7
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa037c523>] ? stop_counter+0x2f/0x2f [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff810ad6ec>] perf_event_create_kernel_counter+0x2e/0x12c
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa037c63e>] reprogram_counter+0xc0/0x109 [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa037c709>] reprogram_fixed_counter+0x82/0x8d [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa037c8f1>] reprogram_idx+0x4a/0x4f [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa037cc53>] kvm_pmu_set_msr+0x16a/0x29b [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa036102a>] kvm_set_msr_common+0xa7d/0xd44 [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03aa161>] ? vmx_set_rflags+0x34/0x36 [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa035f916>] ? __kvm_set_rflags+0x45/0x4e [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03b320e>] vmx_set_msr+0x1b2/0x189e [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa035cd43>] kvm_set_msr+0x61/0x63 [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03ac138>] handle_wrmsr+0x3b/0x64 [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03b1418>] vmx_handle_exit+0x84c/0x8ec [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff81081ea1>] ? rcu_note_context_switch+0x14a/0x167
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03a92dc>] ? vmx_invpcid_supported+0x1b/0x1b [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03a92dc>] ? vmx_invpcid_supported+0x1b/0x1b [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa0366a9e>] kvm_arch_vcpu_ioctl_run+0xcfc/0xeb0 [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa03ab773>] ? __vmx_load_host_state.part.53+0x125/0x12c [kvm_intel]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa036161a>] ? kvm_arch_vcpu_load+0x139/0x143 [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffffa0358fd0>] kvm_vcpu_ioctl+0x169/0x48f [kvm]
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8110c046>] do_vfs_ioctl+0x367/0x421
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff81113d33>] ? __fget+0x6c/0x78
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff8110c139>] SyS_ioctl+0x39/0x64
     Oct 18 18:42:09 InfoSphere kernel: [<ffffffff815f562e>] system_call_fastpath+0x12/0x71
     Oct 18 18:42:09 InfoSphere kernel: ---[ end trace 550a31b84df716cb ]---
     Oct 18 18:42:20 InfoSphere kernel: VM: topology change detected, propagating
     Oct 18 18:42:20 InfoSphere kernel: VM: port 2(vnet0) entered forwarding state
     Oct 18 18:50:52 InfoSphere kernel: mdcmd (244): spindown 1

     Another point worth noting is that the filesystems are reiserfs. If you've ever used any of the suspect betas that had the reiserfs corruption, it could be manifesting as another problem. Over the last year or so, we've seen people with strange issues after populating reiserfs filesystems while on the suspect betas. I'm not saying this is the problem, but it's hard to detect when it is. I did not see any reiserfs-specific issues in the logs, but I don't know the history of any beta usage either.
  23. I would have to agree here. How the array is used at an interactive level is how I determine drive choice. Batch operations can easily wait. All of my workstations and laptops have small 256GB SSDs. All of the big data and/or user/personal files are stored on the unRAID server(s). I'm writing and updating all day long.