t3

Members

  • Posts: 48
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


t3's Achievements

Rookie (2/14)

Reputation: 0

  1. I do (also) have stability problems (with any of the versions after 6.12.3)... the server just becomes unresponsive after some time (sometimes half an hour, sometimes a few hours; it seems to happen sooner if the web interface is open on another machine). It no longer responds to pings (no screen output due to GPU passthrough). Docker is disabled (always), and I have tried the Realtek driver plugin, but that did not help.... kvmhost-diagnostics-20240110-2319.zip
  2. if you are running the image from their page (which would be my 1st guess), then there is an upgrade option right in the settings, called "Home Assistant Operating System" (that's it already). the other option is "Home Assistant Core", which just means the main HA application. if you don't run their specialized image, you may want to try making a backup of your settings and using theirs, from here: https://www.home-assistant.io/installation/linux hope it helps... (see also the CLI sketch after this post list)
  3. i just noticed that there was already an update ~ a month ago, which apparently failed on my machine, so my previous question regarding an update is now obsolete...
  4. Not advice, just some thoughts - since I'm currently (also) setting up a system dedicated to VMs only; I'm using an old thumbdrive as the only array drive, just to be able to fire up all the other Unraid stuff (KVM & Dockers of course). 2 NVMEs in a single BTRFS pool, since I like to have redundancy, and a BTRFS mirror seems to be more straightforward (and the Unraid array is not meant to provide trivial mirroring anyway). My concern is the FUSE layer, which is imo still in effect as long as drives are somehow managed by Unraid (like with pools). My guess: it might be best (in terms of performance) to just set KVM storage up as "unassigned device(s)". Maybe someone can shed light on how pooled drives are affected by FUSE when most (or all) IO is going to VM disk images... (a rough timing sketch for comparing the two paths follows after this post list)
  5. Am I right that a rebuild happens below the file system level? So, e.g., if Unraid is rebuilding a BTRFS drive, it could/would/should be possible to scrub that drive afterwards, to make 100% sure the rebuild did work as intended?

    Short story why I'd want to do/know that: I knocked over my (running) Tower LianLi ITX with 10 Bays https://www.picclickimg.com/d/l400/pict/173424939301_/RARE-Lian-Li PC-Q26A-Mini-ITX-PC-Case-with-four-additional.jpg Pretty nice small enclosure, perfect for Unraid; the only thing is, the drive bay locking mechanism is not super tight. So, even though all drives were spun down at that moment, and I'm almost certain that nothing bad happened to any of them, two of them did get disconnected for a few seconds, until I checked the seating of all of the drives, and ... of course ... Unraid red-X-ed them immediately. Apparently the only way at that point is to let Unraid do a rebuild to get them working again (I have two parity disks). Btw, the Wiki info seems to be outdated (I didn't see any "trust my array" option or whatever else).

    So, from hard facts: since my accident may still have affected the integrity of multiple drives (including parity), I can't be 100% sure that the (now running) rebuild is restoring perfectly valid data... but, in case my above thought is correct, a BTRFS scrub should be the way to ensure that after the rebuild is done.... right? (a minimal scrub sketch follows after this post list)

    Some final thoughts: I love Unraid, until the moment something goes wrong. I've had that a few times, and for every one of those instances, I never felt really sure I had found the right answer to that particular problem; it almost always feels like having applied the wrong measure (like now, since a drive rebuild seems to be not the best thing in this particular case, if not the worst)...
  6. May I add that the Recycle Bin is most likely not the main culprit here (or one at all), but just a symptom of an underlying problem with fuse. This same problem has happened to me regularly for ~two years now, at completely "random" times, but usually when heavy activity from various clients happens (Unraid is still a file server, after all). A couple of months ago it seemed like I had identified Folder Caching as a possible reason, but even after disabling it, the problem still occurs, though less frequently...
  7. t3

    Checksum Suite

    didn't expect that. the plugin would still do that, on a file-by-file basis. but: if you have both drives, and there was no write access after the rebuild, i guess there are some other (faster) ways to compare the two drives; afaik they should be identical on a byte-by-byte basis directly after a rebuild... (a chunk-by-chunk comparison sketch follows after this post list)
  8. t3

    Checksum Suite

    _after_ a rebuild, the plugin will allow you to validate that the rebuilt files have the same content as when the hashes were created. in case they do, this implicitly also means that the rebuild went well so far. with one exception (as there always is one): the rather unlikely case that the hash file itself was corrupted in such a way that it flipped one ascii character into another (since the hash is saved as ascii text). afaik there is no validation for the hash files themselves (a small verification sketch, including a digest of the hash file itself, follows after this post list). ps: i guess you didn't literally mean to compare a rebuilt drive with the original one...
  9. Worked for me too! Using some "no-name" 4GB stick & W7x64 (drive name was the generic "removable device")
  10. thanks a ton for that - just in time (for me) & works perfectly! a note for users with tight FW policies: this requires outgoing connections from the unraid box on tcp port 5223 (XMPPS), but that's all it needs (a quick connectivity test sketch follows after this post list)
  11. yep, ok, thanks... scratch the first part - i should have read the integrated help, where it is stated that read errors are indeed backed by parity. i'm now going to find my way through the replacement procedure(s)...
  12. i did move some 300gb of files off the array over night, and when i came back the next day, one of the disks (holding most of them) was offline; there were 7000+ errors printed in the main tab stats, and the syslog shows lots of sector read errors (and a few write errors). ok, the disk is dead or dying - might be. but what does it mean for the files that were copied off that particular drive? apparently the drive did not go offline on the first occurrence of a read error but only after ~7000 of them, which means some files were definitely read using parity info, while others apparently were not... so, were they corrected? by disk mechanisms? by additional parity reads? or are they now broken?!? it would be good, i guess, if unraid hinted to users whether they need to be scared about the error count in the gui.

    oh, and btw, i think it would be really, really good to have a simple gui-guided procedure to replace (or remove) failed disks in the most ideal way, reducing the risk of data loss to a minimum. since this happens rather seldom, it simply means you are always (again) untrained, but it still touches one of the most important parts of unraid - data safety. so it's always scary again to read through more or less outdated wiki articles and posts showing 10+ steps to follow, and to decide how to proceed at this point - and i guess that happens to everybody in such a case...
  13. great! btw, what do you think of the btrfs issue? is this something to expect in such a case? i must admit i didn't expect it...
  14. OH YES i recently had exactly the same problem; i only discovered that one of the ssd cache mirror disks had been offline for almost two months when, after a power outage (where the UPS power down didn't work, btw), the system did some whatever repair on the mirror... which then corrupted all vm disk images on the cache disk, and the docker image as well. read on here; other user, ~ same story: https://lime-technology.com/forum/index.php?topic=52601.msg505506#msg505506

    speaking of btrfs' bad reputation: it seems that a btrfs mirror is no safe place for docker/vm disk images! as far as i can tell, the mirror only kept intact the files that were written before or after one of the disks dropped out. any file that was changed in-place - like disk images usually are - was corrupt after the mirror was reactivated.

    even having 1+ backups for all and everything - having this situation go on unnoticed for such a long time is a bit on the edge. btw, this means that unraid is already so stable and mature that i don't feel any need to check the system every other day... so, yeah, a notification with a number of big red exclamation marks would be very helpful! (a small pool health-check sketch follows after this post list)
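
Regarding the Home Assistant upgrade options in post 2: on the official Home Assistant OS image, the same kind of update the Settings page offers can also be triggered from the `ha` CLI (available on the HAOS console or via the SSH add-on). A minimal sketch, assuming that CLI is present; it is not available on a bare "Core" install:

```python
# Minimal sketch: query and trigger Home Assistant OS / Core updates by
# shelling out to the `ha` CLI on a Home Assistant OS install.
import subprocess

def ha(*args: str) -> str:
    """Run an `ha` CLI command and return its output."""
    result = subprocess.run(["ha", *args], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(ha("os", "info"))      # current OS version / update channel
    print(ha("core", "info"))    # current Core version
    # Uncomment to actually trigger the updates (roughly what the Settings
    # page buttons do):
    # ha("os", "update")
    # ha("core", "update")
```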
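
Regarding the FUSE question in post 4: one rough way to see what the FUSE layer costs for VM-style I/O is to time the same write through a /mnt/user path and through a direct pool (or unassigned-device) path. A sketch only; the paths and test size below are placeholders, not a definitive benchmark:

```python
# Rough write-throughput comparison: user share (FUSE) vs. direct pool path.
import os
import time

# Placeholder paths - adjust to your own shares/pools:
TEST_PATHS = {
    "fuse (/mnt/user)":    "/mnt/user/domains/io_test.bin",
    "direct (/mnt/cache)": "/mnt/cache/domains/io_test.bin",
}
SIZE_MB = 1024               # total amount written per path
CHUNK = os.urandom(1 << 20)  # 1 MiB of random data (avoids compression effects)

def write_speed(path: str) -> float:
    """Write SIZE_MB mebibytes to `path` and return throughput in MB/s."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the disk
    return SIZE_MB / (time.time() - start)

for label, path in TEST_PATHS.items():
    mbps = write_speed(path)
    os.remove(path)
    print(f"{label:22s} {mbps:8.1f} MB/s")
```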
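
Regarding the scrub question in post 5: assuming the rebuilt disk is btrfs-formatted, a foreground scrub plus a look at the device error counters is one way to confirm the rebuilt data still checksums correctly. A minimal sketch; /mnt/disk3 is a placeholder for the rebuilt disk's mount point:

```python
# Run a btrfs scrub on the rebuilt disk and print the resulting statistics.
import subprocess

MOUNT = "/mnt/disk3"  # placeholder: mount point of the rebuilt btrfs disk

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# -B keeps the scrub in the foreground, so the script waits until it finishes.
print(run(["btrfs", "scrub", "start", "-B", MOUNT]))
# Summary of the finished scrub (checksum errors, corrected blocks, ...).
print(run(["btrfs", "scrub", "status", MOUNT]))
# Per-device error counters should all still be zero.
print(run(["btrfs", "device", "stats", MOUNT]))
```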
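
Regarding the drive comparison in post 7: directly after a rebuild (and with no writes since), the two drives should be bit-identical, so a chunk-by-chunk comparison of the raw devices is one of the faster checks. A sketch, with /dev/sdX and /dev/sdY standing in for the original and the rebuilt drive; read-only, run as root:

```python
# Compare two block devices chunk by chunk and stop at the first difference.
DEV_A = "/dev/sdX"        # placeholder: the original drive
DEV_B = "/dev/sdY"        # placeholder: the rebuilt drive
CHUNK = 64 * 1024 * 1024  # read 64 MiB at a time

def identical(dev_a: str, dev_b: str) -> bool:
    offset = 0
    with open(dev_a, "rb") as a, open(dev_b, "rb") as b:
        while True:
            chunk_a = a.read(CHUNK)
            chunk_b = b.read(CHUNK)
            if chunk_a != chunk_b:
                print(f"mismatch in the chunk starting at byte {offset}")
                return False
            if not chunk_a:  # both devices fully read
                return True
            offset += len(chunk_a)

if __name__ == "__main__":
    print("identical" if identical(DEV_A, DEV_B) else "NOT identical")
```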
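
Regarding the hash validation in post 8: a small sketch of verifying files against a stored hash list in the common "<digest>  <relative path>" (sha256sum-style) format, which also prints a digest of the hash list itself so a corrupted hash file can at least be noticed. Whether the checksum plugin writes exactly this format is an assumption; adjust the parsing if it differs:

```python
# Verify files against a stored hash list and hash the list itself.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()

def verify(hashfile: Path, root: Path) -> None:
    # If the hash list itself got corrupted, its digest will differ from the
    # one you noted down when the list was created.
    print("digest of the hash list itself:", sha256_of(hashfile))
    for line in hashfile.read_text().splitlines():
        if not line.strip():
            continue
        expected, _, name = line.partition("  ")  # "<digest>  <relative path>"
        target = root / name
        ok = target.is_file() and sha256_of(target) == expected
        print("OK  " if ok else "FAIL", name)

if __name__ == "__main__":
    # usage: verify_hashes.py hashes.sha256 /mnt/disk3
    verify(Path(sys.argv[1]), Path(sys.argv[2]))
```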
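
Regarding the firewall note in post 10: a quick way to confirm the Unraid box can open an outgoing TCP connection on port 5223 before blaming the notification agent. The hostname below is a placeholder for whatever push/XMPP server the agent actually talks to:

```python
# Test an outgoing TCP connection on port 5223 (XMPP over TLS).
import socket

HOST = "push.example.com"  # placeholder hostname
PORT = 5223

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"outgoing connection to {HOST}:{PORT} works")
except OSError as exc:
    print(f"cannot reach {HOST}:{PORT}: {exc}")
```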
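
Regarding the silently degraded mirror in post 14: a periodic check of the btrfs device error counters can catch a dropped pool member long before a power event forces a repair. A sketch, assuming the pool is mounted at /mnt/cache; run on a schedule (e.g. via the User Scripts plugin) and treat any non-zero exit code as a reason to look at the pool:

```python
# Flag non-zero btrfs error counters or a missing device on the cache pool.
import subprocess
import sys

MOUNT = "/mnt/cache"  # the usual Unraid cache pool mount point

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

problems = []

# Non-zero counters here mean read/write/flush/corruption errors on a pool member.
for line in run(["btrfs", "device", "stats", MOUNT]).splitlines():
    _, _, value = line.rpartition(" ")
    if value.strip().isdigit() and int(value) != 0:
        problems.append(line.strip())

# A dropped mirror member shows up as a missing device here.
show = run(["btrfs", "filesystem", "show", MOUNT])
if "missing" in show.lower():
    problems.append("btrfs filesystem show reports a missing device")

if problems:
    print("CACHE POOL PROBLEMS:")
    print("\n".join(problems))
    sys.exit(1)

print("cache pool looks healthy")
```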