shEiD

Members
  • Content Count: 99
  • Joined
  • Last visited

Community Reputation: 4 Neutral

About shEiD
  • Rank: Newbie
  • Gender: Undisclosed

Recent Profile Visitors: 993 profile views
  1. I've just installed and started using the File Integrity plugin today. I manually started a `Build` process on 7 (out of 28) drives in my array. disk1 had the fewest files, so it has already finished. But the UI shows some nonsense 🤔: it shows disk1 as a circle, not a green checkmark, even though it has just finished the build and is up to date; it shows disks 4, 5, 9 and 10 with a green checkmark, even though the builds are clearly still running and aren't finished; and it shows disks 7, 12, 17, 18, 19, 22, 23, 24, 26, 27 and 28 with a green checkmark, even…
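For context, a `Build` pass essentially walks a disk and records a checksum per file, which later verification runs compare against. A minimal sketch of that idea in Python (the function name and use of SHA-256 are my assumptions for illustration, not the plugin's actual internals):

```python
import hashlib
import os

def build_hashes(root):
    """Walk a mount point and compute a SHA-256 digest per file --
    a rough sketch of what a File Integrity 'Build' pass records."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large media files don't
                # need to fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            hashes[path] = h.hexdigest()
    return hashes
```

A later check would recompute each digest and flag any file whose hash no longer matches the stored one.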
  2. Linux newb here, so... sorry for a probably silly question: I would like to install fd, but it seems nerdpack has a really old version of it: ``` fd-6.2.0-x86_64-1_slonly.txz ``` 6.2.0 is from Jan 3, 2018 🤔 The current version is 8.2.1. Basically, how does this work in unRAID? Do I need to ask here, in the nerdpack thread, for someone to "update" the included fd package?
  3. @olehj Thank you so much for this plugin. Awesome job 👍 A little feature request, maybe... It would be nice if `Comment` could be displayed more prominently: larger font size and bold.
  4. @bidmead Awesome looking annotations 🤩 What program are you using to do this?
  5. I just finished running a Parity Check with `Write corrections to parity` and updated the parity. The log shows a list of corrections, like these:
```
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701888
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701896
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701904
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701912
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701920
```
I guess that means…
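For anyone else reading: with single parity, the P sector is the byte-wise XOR of the same-offset sector from every data drive, and "P corrected" means the recovery thread recomputed that XOR and rewrote the parity drive where it disagreed. A minimal sketch of the relation (the function names are mine, for illustration only):

```python
from functools import reduce

def compute_parity(data_sectors):
    """XOR the same-offset sector from every data drive -- this is the
    single-parity (P) value that a parity check verifies."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_sectors))

def is_sync_error(data_sectors, stored_parity):
    """True when the parity drive's stored sector no longer matches the
    XOR of the data drives; a correcting check rewrites P in that case."""
    return stored_parity != compute_parity(data_sectors)
```

For example, with data sectors `0x0F`, `0xF0` and `0xFF`, the computed P byte is `0x0F ^ 0xF0 ^ 0xFF = 0x00`.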
  6. IMHO, this is a wonderful suggestion. I would say `Parity Update` is the best option: short and intuitively understandable. I would also like to propose that Parity Check and Parity Update be split into separate buttons in the UI. Furthermore, it would be nice if the Parity Update button were disabled by default. As far as I understand, it is recommended to always run a Parity Check, not an Update, at least according to the wiki. Notice how much easier it is to say Check and Update, instead of Check and Check-without-writing... yada...yada...yada... 😉 When…
  7. I assume that parity is OK and the problem is borked files on the array, because the read errors were on the array drives... Or rather, maybe I should ask: are there situations where parity goes out of sync for some reason, even without disk errors on the parity drives themselves? Is there some place with detailed documentation about this? The wiki has pretty much nothing useful on this topic. If I run the write-corrections-to-parity check after the read-only check (when parity errors were noted), can unRAID use those "error notes" and be able to go and f…
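One point worth spelling out, since it answers half the question above: a single-parity sync error only says the XOR relation across the drives is broken; it cannot, by itself, tell you whether a data drive or the parity drive holds the wrong bit. A tiny demonstration (toy one-byte "sectors", names mine):

```python
def xor_bytes(*rows):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(rows[0]))
    for row in rows:
        for i, b in enumerate(row):
            out[i] ^= b
    return bytes(out)

data1, data2 = b"\x0f", b"\x33"
parity = xor_bytes(data1, data2)  # array in sync

# Case A: flip one bit on a data drive.
bad_data = bytes([data1[0] ^ 0x01])
mismatch_a = xor_bytes(bad_data, data2) != parity

# Case B: flip the same bit on the parity drive instead.
bad_parity = bytes([parity[0] ^ 0x01])
mismatch_b = xor_bytes(data1, data2) != bad_parity

# Both cases produce the identical sync error at that sector, so the
# check alone cannot say which drive is actually wrong.
```

That is presumably why a correcting check simply assumes the data drives are right and rewrites parity.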
  8. I do not use questionable drives anymore: all of the drives in both of my servers are WD Reds only. But that's beside the point. Every single drive is gonna become questionable with time... hopefully... if it's not gonna straight up die with no warning 😄 That is exactly the situation here: these are older WD Reds, and it seems a couple of them have started to show their age and need to be retired... I am actually happy this happened on this testing server, and not on my main one. I am actually trying to learn how unRAID and its parity work, and how to deal with this exact situation…
  9. So I ran a non-correcting parity check again: the same number of 157 errors. 🤔 No disk read errors on this 2nd run. 👍 But this time, TWO drives had their SMART Raw read error rate increase by 1 (15 to 16, and 2 to 3). 😠 Questions: What do these 157 parity errors actually mean? I am really confused: is the data wrong on the disk(s) that had those read/SMART errors, or is the parity incorrect? What do I do now? What can I do to make sure the files are recovered and correct? Or am I screwed, because I do not…
  10. Switched to another slot and ran SMART tests again: no errors.
```
Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline  Completed without error  00%        38078            -
# 2  Short offline     Completed without error  00%        38070            -
# 3  Extended offline  Completed without error  00%        38027            -
# 4  Short offline     Completed without error  00%        38020            -
```
Now I'm really confused. This drive had read errors in 2 separate enclosures and slots. But now the SMART tests…
  11. Sorry, I forgot to write up my system. There are no cables to replace, tbh. I have all my array drives in 16-bay SAS disk shelves. Inside the shelves, the backplanes are connected to the expanders via 4x SFF-8087 cables, and the shelves themselves are connected to the server's HBA via SFF-8088 cables. Another detail I forgot to mention is that I actually swapped one of the shelves between my primary and secondary servers just last Sunday (6 days ago). So I am pretty sure that the cables and/or drive slot are not the culprit, as I had no problems with this shelf and drive slot when it was…
  12. The SMART extended self-test - Completed without error 👍 What do I do next?
  13. Added the 1 and 200 SMART attributes to all drives. Ran the short SMART test: completed without error. Now running the long SMART test... it'll take some time, I gather. Can I navigate away from the SMART page? I mean, will I lose the progress info if I close the browser tab and come back to this SMART info page later on, while the test is still running?
  14. First and foremost: I have been running unRAID for years, but never used parity, so I apologize for having no experience when it comes to parity stuff. I added parity on my secondary ("testing") unRAID machine some time ago; I wanted to try it out before using it on my main unRAID machine. Yesterday I noticed that one of the disks reported 32 read errors, so today I decided to run a parity check. As per the instructions in the wiki, I ran a non-correcting parity check. It finished with 157 sync errors. At the same time, the disk with 32 errors now had 4…
  15. I finally finished moving all the files off this disk using unBALANCE. I did not touch/move the directory that gives disk errors. Interesting: the drive should be completely empty, as the only thing left is that borked folder, which shows up empty over the network. But the unRAID webUI shows the drive is still using 321 GB, which is actually the size of that borked folder, i.e. the size of all the files inside that folder that are now gone... 🤔 Does this mean that unRAID somehow sees the actual files? Maybe a silly question: would it be OK for me to move this drive to anot…
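One way to investigate a discrepancy like the one in the last post is to sum file sizes directly on the disk's own mount point, bypassing the network share, and compare that against what the webUI reports. A minimal sketch (the mount path in the comment is hypothetical; on a failing disk, unreadable entries are simply skipped):

```python
import os

def used_bytes(root):
    """Sum the size of every regular file reachable under root.
    Compare the result against the usage figure the webUI shows:
    a large gap suggests space held by entries the directory walk
    can no longer see (e.g. a corrupted folder)."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # unreadable file on a failing disk -- skip it
    return total

# e.g. used_bytes("/mnt/disk1")  # hypothetical disk mount path
```

If this walk returns ~0 bytes while the UI still reports 321 GB in use, the space is being accounted at the filesystem level even though the files are no longer reachable, which points at filesystem damage rather than hidden files.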