
About t3

  • Rank
    Advanced Member



  1. Just happened to me (user folder unavailable):
     Feb 20 04:06:37 HostServer shfs: shfs: fuse.c:1387: unlink_node: Assertion `node->nlookup > 1' failed.
     Feb 20 04:06:37 HostServer emhttpd: error: get_filesystem_status, 6453: Transport endpoint is not connected (107): scandir
  2. Worked for me too! Using some "no-name" 4GB stick and W7 x64 (the drive name was the generic "removable device").
  3. Thanks a ton for that - just in time (for me), and it works perfectly! A note for users with tight firewall policies: this requires outgoing connections from the unRAID box on TCP port 5223 (XMPPS), but that's all it needs.
  4. Yep, OK, thanks... scratch the first part - I should have read the integrated help, where it states that read errors are indeed backed by parity. I'm now going to find my way through the replacement procedure(s)...
  5. I moved some 300 GB of files off the array overnight, and when I came back the next day, one of the disks (holding most of them) was offline. There were 7000+ errors shown in the Main tab stats, and the syslog shows lots of sector read errors (and a few write errors). OK, the disk is dead or dying - might be. But what does that mean for the files that were copied off that particular drive? Apparently the drive did not go offline on the first read error but only after ~7000 of them, which means some files were definitely read using parity info, while others apparently were not. So, were they corrected? By disk mechanisms? By additional parity reads? Or are they now broken? It would be good, I guess, if unRAID told users whether they need to be scared of the error count in the GUI.
     Oh, and by the way, I think it would be really, really good to have a simple GUI-guided procedure to replace (or remove) failed disks in the safest way, reducing the risk of data loss to a minimum. Since this happens rather seldom, you are always untrained (again) when it does, yet it touches one of the most important parts of unRAID - data safety. So it is always scary, all over again, to read through more or less outdated wiki articles and posts listing 10+ steps to follow, and to decide how to proceed from there - and I guess that happens to everybody in such a case...
  6. Great! By the way, what do you think of the btrfs issue? Is this something to expect in such a case? I must admit I didn't expect it...
  7. OH YES - I recently had exactly the same problem. I only discovered that one of the SSD cache mirror disks had been offline for almost two months when, after a power outage (where the UPS-triggered shutdown didn't work, by the way), the system ran some kind of repair on the mirror... which then corrupted all VM disk images on the cache disk, and the docker image as well. Read on here; another user with roughly the same story: https://lime-technology.com/forum/index.php?topic=52601.msg505506#msg505506
     Speaking of btrfs' bad reputation: it seems that a btrfs mirror is no safe place for docker/VM disk images! As far as I can tell, the mirror only kept intact those files that were written before or after one of the disks dropped out. Any file that was changed in place - as disk images usually are - was corrupt after the mirror was reactivated. Even with 1+ backups of each and everything, having this situation go unnoticed for such a long time is a bit on the edge. Then again, this also means unRAID is already so stable and mature that I don't feel any need to check the system every other day... so, yes, a notification with a number of big red exclamation marks is very helpful!
  8. A quick heads-up: the ASRock C2550D4I and ASRock E3C224D4I-14S are also working without problems.
  9. ... as far as my files were affected, I can confirm it works now; tested with sync triggered from both ends (web and local), and everything showed up correctly on either side - thanks a lot!
  10. Thanks for pointing this out; I totally missed it, since there is no context menu, button, or any indication whatsoever that the names can be edited simply by double-clicking them...
  11. Oh, and by the way, it would be great if it were possible to set a meaningful name for the Dropbox instance prior to linking...
  12. I had some success "setting up" selective sync by copying a file from my Windows install over to the config folder of the docker. After installing and linking the docker, the Dropbox config is created in appdata/Dropbox/config/instance1 and contains a file named unlink.db. Since size, type, and name suggested it might contain exactly the list of everything that should not be synced, I just copied it from my Windows box over to the docker config, and as far as I can tell it works... Of course this is only suitable if the selection is not going to change regularly.
      Unfortunately this docker has problems with files that contain extended characters (e.g. German umlauts)... it seems like the charset setting is missing or wrong?
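The copy trick above can be sketched roughly like this. A minimal Python sketch under the post's own assumptions: the Windows config dir and the docker's appdata/Dropbox/config/instance1 folder are passed in as paths, and whether unlink.db really carries the selective-sync list is the post's educated guess, not documented Dropbox behavior.

```python
# Sketch of the config-copy trick described above; paths are placeholders
# and unlink.db's role is an assumption, not documented behavior.
import shutil
from pathlib import Path

def copy_selective_sync_db(windows_cfg: str, docker_cfg: str) -> Path:
    """Copy unlink.db into the docker config, keeping a backup of any
    existing copy so the change is easy to undo."""
    src = Path(windows_cfg) / "unlink.db"
    dst = Path(docker_cfg) / "unlink.db"
    if dst.exists():
        # preserve whatever the docker created, in case this goes wrong
        shutil.copy2(dst, dst.with_name(dst.name + ".bak"))
    shutil.copy2(src, dst)
    return dst
```

As the post says, this is only practical if the exclusion list rarely changes, since any edit on the Windows side would have to be copied over again.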
  13. t3

    Checksum Suite

    Well, no, I can't say for sure about robocopy (I'm not using it), but what I see is missing the right approach in the first place. OK: tower2 is your master and has all the proper *.hash files (the reference); you had also created them on tower1, which should be your mirror, but unfortunately those are useless - unless you were to write a script that can compare the contents of both towers directly from the hashes (I'm not aware of such a tool).
    So, to make use of what you have: first delete all the hash files from tower1 (the mirror), and disable automatic hashing in the Checksum Suite settings (if you had set it up). Now you can use robocopy to copy the hashes from tower2 to tower1, so the master's file content and folder structure is available 1:1 on the slave, but only as a sort of metadata.
    Now you can either run corz over tower1, and it will find missing, different, and extra files, since it uses the tower2 hash files as the reference... but of course it will need to read all the files and re-hash them during verification. Another way would be to start a manual verification using the checksum tools on the unRAID box. AFAIK (I haven't done that) it will also report missing, different, and extra*) files, since it likewise uses the hash files from tower2 to verify tower1's contents, but of course this also needs to read and hash everything. The main advantage is that you don't need to keep your client connected all the time, and verification will most probably run a lot faster when accessing the shares directly on the box.
    So, in short, there is no easy way to avoid on-the-fly hashing of at least one tower, since there are no tools out there (to my knowledge) that will take (corz) hash files in folder structures and compare directories from two sources...
    *) Note: I'm sure the plugin will report different and missing files during verification; I just don't know if it also reports files that are not hashed - maybe Squid can tell...
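Since no ready-made tool seems to exist that compares two towers directly from the hash files, here is a rough sketch of what such a script could look like. Assumptions: each *.hash file is plain md5sum-style text ("digest *filename" per line, "#" for comments), which is how corz checksum files appear to be laid out; the two root paths are placeholders.

```python
# Hypothetical sketch: compare two trees of md5sum-style *.hash files
# (master vs. mirror) without re-reading the data files themselves.
import os

def parse_hash_file(path):
    """Return {filename: digest} from an md5sum-style .hash file."""
    entries = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comment lines
            digest, _, name = line.partition(" ")
            entries[name.strip().lstrip("*")] = digest.lower()
    return entries

def collect(tree):
    """Map (relative .hash path, filename) -> digest for a whole tree."""
    result = {}
    for root, _dirs, files in os.walk(tree):
        for f in files:
            if not f.endswith(".hash"):
                continue
            rel = os.path.relpath(os.path.join(root, f), tree)
            for name, digest in parse_hash_file(os.path.join(root, f)).items():
                result[(rel, name)] = digest
    return result

def compare_hash_trees(master, mirror):
    """Report entries missing from, extra on, or different between towers."""
    m, s = collect(master), collect(mirror)
    return {
        "missing": sorted(k for k in m if k not in s),
        "extra": sorted(k for k in s if k not in m),
        "different": sorted(k for k in m if k in s and m[k] != s[k]),
    }
```

Run against shares mounted from both towers, this would flag discrepancies purely from the metadata, with the usual caveat that it only proves the hash files differ, not that the data files actually do.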
  14. t3

    Checksum Suite

    AFAIK this won't work that way, since corz has no comparison mode [for the hashes only]. If tower1 should be a 1:1 replica of tower2, you might copy the hash files from tower2 to tower1 (e.g. using PowerShell to copy all *.hash files plus the full folder structure), and then run corz against tower1 to find missing, different, and extra files. There are probably other tools out there to synchronize/compare folders, but they won't make any use of the hash files corz/Checksum Suite created, and thus will need to re-hash everything on the run.
    Another way would be to use Syncthing (available as a docker) to selectively synchronize folders; it lets you set a "master" (read-only) side and uses SHA-256 to compare contents. Even though it's definitely not meant to handle n terabytes and millions of files, I'm using it to create a mirror of my main server (a sort of hot-standby box)...
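The "copy all *.hash files plus the full folder structure" step could be sketched like this (in Python rather than PowerShell; source and target paths are placeholders):

```python
# Sketch: mirror only the *.hash files from one tower to another,
# recreating the folder structure as it goes. Paths are placeholders.
import os, shutil

def copy_hash_tree(src, dst):
    """Copy every *.hash file from src to dst, recreating the folders,
    and return the relative paths that were copied."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for f in files:
            if not f.endswith(".hash"):
                continue  # leave the actual data files alone
            rel = os.path.relpath(root, src)
            target_dir = os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            shutil.copy2(os.path.join(root, f), os.path.join(target_dir, f))
            copied.append(os.path.normpath(os.path.join(rel, f)))
    return copied
```

After this, the mirror holds the master's reference hashes as metadata only, ready for a corz or plugin verification run against tower1's real data.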
  15. ... that's the way any software goes. By the way, I wasn't completely precise - as far as ipmitool can tell, the sensor in question simply doesn't support these particular "na" threshold values (meaning it also isn't possible to set them to something sane); this is true for most voltages on this board...