Posts posted by RobJ

  1. 56 minutes ago, tunetyme said:
    2 hours ago, RobJ said:

    That's why my very first step is to recommend a parity check, so that you know there are no drive problems to take care of, and that parity is good.  There's no reason it should not stay good throughout.

     

    I think you missed my point... The fact that you have two different size drives and different formats raises the question in my mind: how can parity be valid when I swap out a 2TB RFS drive with a 4TB XFS drive?  I know it works out now, but at the time it was a stretch to comprehend this.

     

    From a parity standpoint, size doesn't matter, format doesn't matter, data doesn't matter, nothing matters but the bits on every drive, whether you are using them or not.  From a parity standpoint, drives are all the same size, exactly as big as the parity drive.  They just have zeroes past the end of the physical drive.

     

    Here are links explaining parity (the second has more links):

       Parity-Protected Array, from the Manual

       How does parity work?
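    The zero-padding point above can be sketched in a few lines (illustrative Python, not unRAID code; the byte strings standing in for drive contents are made up):

```python
from functools import reduce

# Illustrative sketch (not unRAID code): parity is a byte-wise XOR across
# all data drives, with short drives padded by zeros out to parity size.
def parity(drives, parity_size):
    padded = [d + bytes(parity_size - len(d)) for d in drives]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*padded))

# A 2TB drive next to a 4TB drive, in miniature: 2 bytes vs 4 bytes.
small = b"\x0f\x0f"
large = b"\xf0\xf0\xaa\xaa"
p = parity([small, large], 4)

# Swapping the small drive for a larger one holding the same data (zeros
# past the old end) leaves parity untouched - size and format don't matter.
assert parity([small + b"\x00\x00", large], 4) == p
```

    From parity's point of view, the padded and unpadded drives are indistinguishable.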

  2. 4 hours ago, itimpi said:

    You have left out the complications caused by dual parity, since if you are using that, any re-arrangement of the drives invalidates parity2.

     

    Ah, good point!  Can't have everything.  But where I'm running into this most is the wiki procedure for converting drives to XFS, and Parity2 is invalid anyway, because of all the drive swapping and reassignment.

  3. 14 minutes ago, tunetyme said:

    I might add something at the end that would reassure the user if they don't click the check box.

     

    "If you forget to click the Parity is already valid checkbox and start the system, it will rebuild parity; it will take a while, but no harm is done."

     

    Done.

     

    18 minutes ago, tunetyme said:

    It is difficult for me as a user to have the confidence that everything on the new 4TB drive is identical to the old 2TB drive with a different format. So I wouldn't click that button unless I was absolutely sure that parity is valid. For me, if I could just verify that parity is correct one time then I could use this method with confidence.

     

    That's why my very first step is to recommend a parity check, so that you know there are no drive problems to take care of, and that parity is good.  There's no reason it should not stay good throughout.

     

    19 minutes ago, tunetyme said:

    Let me know if this is helpful.  As I have said I would be happy to try to help with the Wiki if you think this kind of perspective would be helpful.

     

    Keep it coming!   ;)

     

    I have also added a summary of the method at the beginning, and a new Methods section covering the various factors that are involved, with comparative verbiage between the different methods.  The methods are only summarized.  Will it be helpful?  Probably not, so many more words added...

  4. 8 minutes ago, tunetyme said:

    The rsync command I used was

    rsync -avPX /mnt/disk3 /mnt/disk7

     

    That should be:  rsync -avPX /mnt/disk3/ /mnt/disk7

    Note the slash after the 3.  Without that slash, you will end up with a disk3 folder on Disk 7 (/mnt/disk7/disk3).  With the slash added, you will end up with the entire contents of Disk 3 on Disk 7, and no disk3 folder.
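    The difference the trailing slash makes can be demonstrated safely with scratch directories (the /tmp paths below are stand-ins for /mnt/disk3 and /mnt/disk7):

```shell
# Stand-in source "disk" with one file, and two destination "disks"
mkdir -p /tmp/disk3/Movies /tmp/disk7a /tmp/disk7b
touch /tmp/disk3/Movies/film.mkv

# Without the trailing slash: the disk3 directory itself is copied,
# leaving /tmp/disk7a/disk3/Movies/film.mkv
rsync -avPX /tmp/disk3 /tmp/disk7a

# With the trailing slash: only the contents are copied,
# leaving /tmp/disk7b/Movies/film.mkv
rsync -avPX /tmp/disk3/ /tmp/disk7b
```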

  5. tunetyme, would you mind checking the change I've made?  Here's the old Step 16:

    • You should see all array disks with a blue icon, a warning that the parity disk will be erased, and a check box for Parity is already valid; IMPORTANT! click the check box, make sure it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! then click the Start button to start the array; it should start up without issue and look almost identical to what it looked like before the swap, with no parity check needed; however the XFS disk is now online and its files are now being shared as they normally would; check it all if you are in doubt

    And here's the new version:

    • You should see all array disks with blue icons, and a warning (All data on the parity drive will be erased when array is started), and a check box for Parity is already valid. VERY IMPORTANT! Click the check box! Make sure that it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! Then click the Start button to start the array. It should start up without issue (and without erasing and rebuilding parity), and look almost identical to what it looked like before the swap, with no parity check needed. However the XFS disk is now online and its files are now being shared as they normally would. Check it all if you are in doubt.
      •  Before you click the Start button, you may still see the warning about the parity drive being erased, but if you have put a check mark in the checkbox for Parity is already valid, then the parity drive will NOT be erased or touched. It is considered to be already fully valid.
  6. Paul, I take back much of what I said - I never saw that comment by JonP, or any similar comments that there was *ANY* Ryzen support available yet, at all!  And I also was completely unaware that any Ryzen support had been added to 4.9.10.  ALL of the comments related to that seemed to be that they were waiting for 4.10 or 4.11.  I do apologize for that.

     

    But aren't there comments that LimeTech wasn't completely successful yet?  I take that to mean they aren't done making appropriate changes.  Also, have you seen any info on whether KVM/QEMU and related are updated for Ryzen yet?  That's fairly important I think.

     

    There is certainly a lot of interest in this thread, probably a lot more than anyone here realizes.  And Paul, while we can't possibly pay you for the investigative work you have done, it is invaluable, has been and will be very helpful!

    It wouldn't be applicable for a new system, so perhaps shouldn't even appear then.  But take a look at my newest suggestion, link below - it makes the "parity is valid" checkbox almost obsolete, by letting the system decide when parity is valid or not.  I think I'd still want it available for the odd case where you reconstruct an array that used to be - like a lost super.dat, or a v4.7 array being converted in a clean install.

     

       Allow drive reassignments without requiring New Config

    The new Retain feature, part of New Config, is a great thing that makes it much easier to rearrange drive assignments.  But it's still a fairly heavy task that causes risk and confusion for certain new or non-technical users.  It used to be much easier in earlier versions: in 6.0 and 6.1 you could stop the array, then swap drive assignments or move them to different drive numbers/slots, without having to do New Config or being warned that parity would be erased.  You could just start the array up with valid parity, so long as exactly the same set of drives was assigned.  It would really make life easier, and less confusing for some users, if we could return to that mode when safe to do so.

     

    An implementation suggestion to accomplish the above:  At start, or when super.dat is loaded, or at the stop of the array, collect all of the drive serial numbers (and model numbers too if desired), separated by line feeds, and save that as an unsorted blob (for P2).  Sort the blob (ordinary alphabetic sort) and save that as the sorted blob (for P1).  At any point thereafter, if there has been a New Config or there have been any drive assignment changes at all, then before posting the warning about parity being erased, collect new blobs and compare with the saved ones.  If the sorted ones are different, then parity is not valid, and the erased warning should be displayed.  If the unsorted ones are different, then parity2 is no longer valid.  But if they are the same, then parity is still valid, and the messages and behavior can be adapted accordingly.  Sorting the blob puts the serials in a consistent order that removes the effect of drive numbering and movement.  So long as it's exactly the same set of drives, the sorted blob will match, no matter how they have been moved around.
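    A rough sketch of that comparison logic (hypothetical Python purely to illustrate the idea - none of these names are actual unRAID internals):

```python
def collect_blobs(serials):
    """Build the two blobs from the current drive serials, in slot order:
    unsorted = slot order (guards Parity2), sorted = set of drives (guards Parity1)."""
    return "\n".join(serials), "\n".join(sorted(serials))

def parity_status(saved, current):
    """Compare saved blobs against current ones; return (parity1_valid, parity2_valid)."""
    saved_unsorted, saved_sorted = saved
    cur_unsorted, cur_sorted = current
    return cur_sorted == saved_sorted, cur_unsorted == saved_unsorted

# Same drives moved to different slots: Parity1 survives, Parity2 does not.
saved = collect_blobs(["SN-AAA", "SN-BBB", "SN-CCC"])
rearranged = collect_blobs(["SN-BBB", "SN-AAA", "SN-CCC"])
p1, p2 = parity_status(saved, rearranged)
```

    With a matching sorted blob the array just starts normally; only a genuinely different set of drives would trigger the "parity will be erased" warning.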

     

    The really nice effect is that users won't unnecessarily see the "parity will be erased" warning, or have to click the "parity is valid" checkbox.  The array will just start like normal, even if a New Config had been performed (so long as the blobs matched).  You *almost* don't need the "Parity is already valid" checkbox any more.

     

    The one complication to work around is that if they add a drive and unRAID clears it, or a Preclear signature is recognized on it, then the blobs won't match but parity is valid anyway.  I *think* it's just a matter of where you place the test.  Or collect the blob without the new cleared disk, for comparison.

     

    Edit: need to add that ANY drive assignment changes invalidates Parity2.

  9. After using New Config, the warning "All data on the parity drive will be erased when array is started" appears.  Nothing wrong with this normally, but if the user clicks the checkbox for "Parity is already valid", the warning remains.  This has caused enormous confusion to one or more new or non-technical users, who cannot see past this warning that their parity drive will still be erased.  Most of us know what is meant when we click the box to say it's valid, but all that new users see and understand is that parity is valid AND the parity drive will be erased!

     

    If the checkbox has a check, the warning should either be removed, or replaced with something like "All current parity data will be retained, not erased".

  10. Harro and principis, you have posted in a very old thread about a very old version, Plex in v5, which was 32 bit, very different from what you are running.  At that time, they had never heard of Dockers or diagnostics or XFS.  Please post again in the support thread for the particular Plex you have installed.

    I'm sorry guys, but this all seems way too premature!  You're trying to get older tech to work with the newest hardware, without any compatibility updates specific to that new tech.  I would not expect JonP or anyone else to participate here until they had first added what they could: a Ryzen-friendly kernel with kernel hardware support, Ryzen-tweaked KVM and its related modules, and various system tweaks to optimize the Ryzen experience.  After that, they can join you and participate.  It's like having an old version with bugs, and an update with fixes.  Why would a developer want to discuss problems with the old?  They are always going to want you to update and test first, then you can talk.

     

    There's so much great work in this thread, especially from Paul, but it's based on the old stuff, not on what you will be using, so it seems to me that much of the effort is wasted.  Patience!!!

  12. 25 minutes ago, Squid said:

    ( @RobJ - FYI, FCP added output last month from mcelog if it's installed, and I'm impressed with the detail that it gives )

     

    I saw you had added it and was glad to see it, and yes, I've been very pleased with the reporting it gives you, most of the time.  I don't remember the details, but I believe I've seen a report or two of other hardware subsystem MCE issues that were too cryptic for me, and I couldn't find good advice online.  But mostly it's great.

     

    SDguy, I believe it's reporting that your ECC RAM detected and corrected a memory error, so this looks harmless.  But the fact that it's having them could possibly be related to your lockups in the past - a memory error that *couldn't* be corrected.  You may want to try the PassMark Memtest on your RAM (it has ECC RAM support).

  13. On 3/29/2017 at 9:35 AM, tunetyme said:

    BTW I think it is a significant flaw that when you use New Config the drive format goes to Auto.  Major source of frustration to go through and change everything back.

     

    This is a good feature request, and I agree with it.  It's not really a flaw, though, because New Config is basically resetting the array back to nothing.

    The new Retain feature is working well; it modifies New Config to reset the array config but retain some or all of the previous assignments.  What we need now is for the Retain feature to also retain the existing file system setting (ReiserFS, XFS, or BTRFS) for each drive that's retained, rather than setting them all to Auto.  This would save steps for us, and avoid some risk and confusion.

    Not sure, but you may have to run those commands within the Docker container environment.  There's a way to exec a shell within it, possibly covered in the Docker FAQ.  But it's probably better built into the container startup somewhere.  Try asking the elasticsearch container author to add it.
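    For reference, getting a shell inside a running container generally looks like the command below.  The container name "elasticsearch" is an assumption (check "docker ps" for the real one), and the command is only printed here so the sketch runs even without Docker present:

```shell
# Assumed container name - substitute the one shown by "docker ps"
CONTAINER=elasticsearch
CMD="docker exec -it $CONTAINER /bin/sh"
echo "$CMD"   # run this to get an interactive shell inside the container
```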

  15. I have seen one other case like this, and no, I can't explain it either.  The long test should have stopped on the first pending sector, and reported its LBA.  Sounds like the drive is mostly OK, and the sectors are readable.

     

    The only way to clear pending sectors is to write to them.  So what generally has to be done in unRAID is to either rebuild the drive onto itself (forcing a rewrite of every sector with what is already there), or pull it and Preclear it, then rebuild onto it or a substitute, or run a non-destructive BadBlocks test on it (using the -n option, I believe, which rewrites each sector onto itself).
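    To make the BadBlocks option concrete, here's the -n mode exercised on a scratch file instead of a real drive.  On an actual disk you'd target the device (e.g. /dev/sdX) with the drive out of the array; this sketch assumes the e2fsprogs badblocks tool is installed:

```shell
# Create a small scratch file to stand in for the drive
dd if=/dev/urandom of=/tmp/bb-demo bs=64k count=16 2>/dev/null
before=$(md5sum < /tmp/bb-demo)

# -n = non-destructive read-write mode: reads each block, writes test
# patterns, then writes the original data back - the rewrite is what
# clears pending sectors on a real drive
badblocks -ns /tmp/bb-demo

after=$(md5sum < /tmp/bb-demo)
```

    The before/after checksums match because -n restores every block's original contents after testing it.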
