RobJ

Everything posted by RobJ

  1. "I think you missed my point... The fact that you have two different size drives with different formats raises the question in my mind: how can parity be valid when I swap out a 2TB RFS drive with a 4TB XFS drive? I know it works out now, but at the time it was a stretch to comprehend." From a parity standpoint, size doesn't matter, format doesn't matter, data doesn't matter; nothing matters but the bits on every drive, whether you are using them or not. From a parity standpoint, drives are all the same size, exactly as big as the parity drive; they just have zeroes past the end of the physical drive. Here are links explaining parity (the second has more links): Parity-Protected Array (from the Manual), and How does parity work?
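     To make the zero-padding idea concrete, here's a minimal shell sketch (the byte values are made up for the example): parity at any given offset is just the XOR of the bytes at that offset on every data drive, and a drive smaller than the parity drive contributes zeroes past its physical end.

         # Hypothetical bytes at the same offset on three drives
         d1=0xB5   # byte on a 4TB drive
         d2=0x3C   # byte on a 2TB drive, within its capacity
         d3=0x00   # same offset on a smaller drive: past its end, so it reads as zero
         printf 'parity byte: 0x%02X\n' $(( d1 ^ d2 ^ d3 ))

     Format and filesystem never enter into that arithmetic; only the raw bits on each drive do.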
  2. Not related, but you have IDE emulation turned on for your onboard SATA drives. When you next boot, go into the BIOS settings and look for the SATA mode, and change it to a native SATA mode, preferably AHCI if available; anything but IDE emulation mode. It should be slightly faster, and a little safer. You had your first segfault on March 16 at 2:05am, then again March 24 at 2:03am, then March 27 around 7:30pm, then the last on March 30 at 2:05am. All were associated with the Plex Media Scanner. A couple of them mention libjemalloc, but I'm not sure that matters, because after the first segfault on March 16 the program may not have been running correctly. When these are reported, you should reboot when next convenient; don't keep running just because it feels like nothing's wrong. The two main causes of segfaults are RAM faults and dependency issues, followed well after those by program bugs. In this case, I think it's a dependency issue, and for that you'll need PhAzE to take a look when he has time. These are rarely easy to figure out. There is a clear association with the time, a little after 2am, so you might look into what happens around then.
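     If you want to dig into this yourself, here's a hedged sketch, assuming the standard syslog location:

         # List all recorded segfaults with their timestamps
         grep -i segfault /var/log/syslog
         # See what is scheduled to run in the early morning hours
         ls /etc/cron.daily/ /etc/cron.d/ 2>/dev/null
         crontab -l

     Since all four events cluster a little after 2am, a nightly scheduled task (such as a Plex library scan) is the natural suspect.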
  3. Ah, good point! Can't have everything. But where I'm running into this most is the wiki procedure for converting drives to XFS, and there Parity2 is invalid anyway, because of all the drive swapping and reassignment.
  4. Done. That's why my very first step is to recommend a parity check, so that you know there are no drive problems to take care of, and that parity is good. There's no reason it should not stay good throughout. Keep it coming! I have also added a summary of the method at the beginning, and a new Method section covering the various factors that are involved, with comparative notes on the different methods. The methods are only summarized. Will it be helpful? Probably not; so many more words added...
  5. That should be:

         rsync -avPX /mnt/disk3/ /mnt/disk7

     Note the slash after the 3. Without that slash, you will end up with a disk3 folder on Disk 7 (/mnt/disk7/disk3). With the slash added, you will end up with the entire contents of Disk 3 on Disk 7, and no disk3 folder.
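     If you're unsure, rsync can preview exactly what it would do; a harmless check, using the same paths:

         # Dry run: lists what would be copied without writing anything
         rsync -avPX --dry-run /mnt/disk3/ /mnt/disk7

     The -a flag preserves permissions and timestamps, -P shows progress and allows resuming, and -X carries over extended attributes.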
  6. tunetyme, would you mind checking the change I've made? Here's the old Step 16:

         You should see all array disks with a blue icon, a warning that the parity disk will be erased, and a check box for Parity is already valid; IMPORTANT! click the check box, make sure it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! then click the Start button to start the array; it should start up without issue and look almost identical to what it looked like before the swap, with no parity check needed; however the XFS disk is now online and its files are now being shared as they normally would; check it all if you are in doubt

     And here's the new version:

         You should see all array disks with blue icons, and a warning (All data on the parity drive will be erased when array is started), and a check box for Parity is already valid. VERY IMPORTANT! Click the check box! Make sure that it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! Then click the Start button to start the array. It should start up without issue (and without erasing and rebuilding parity), and look almost identical to what it looked like before the swap, with no parity check needed. However the XFS disk is now online and its files are now being shared as they normally would. Check it all if you are in doubt.

         Before you click the Start button, you may still see the warning about the parity drive being erased, but if you have put a check mark in the checkbox for Parity is already valid, then the parity drive will NOT be erased or touched. It is considered to be already fully valid.
  7. Howdy c3! We meet again! If that was all the paperboy wanted to do, what you presented above, there probably wouldn't be much of an issue. But the problem is that the paperboy doesn't want to just toss the paper, and he doesn't want to just come to the door, knock, and seek permission. The paperboy wants to walk right in with a huge notebook and wander around the rooms. He looks in on Father, and writes down that he's looking at a replacement for the TV. He looks in on Mother, and writes down that she's checking on shoes. He checks on big sister, and notes she's reading up on STDs. He checks up on the son, and notes he's reading about bomb making (I threw that in as a curve ball). Then he leaves, walks into the neighbor's house, and repeats. At the end of the day, he goes online and advertises that he has some huge notebooks for sale!
  8. Paul, I take back much of what I said - I never saw that comment by JonP, or any similar comments that there was *ANY* Ryzen support available yet at all! And I was also completely unaware that any Ryzen support had been added to 4.9.10. ALL of the comments I saw seemed to say they were waiting for 4.10 or 4.11. I do apologize for that. But aren't there comments that LimeTech wasn't completely successful yet? I take that to mean they aren't done making appropriate changes. Also, have you seen any info on whether KVM/QEMU and related components are updated for Ryzen yet? That's fairly important, I think. There is certainly a lot of interest in this thread, probably a lot more than anyone here realizes. And Paul, while we can't possibly pay you for the investigative work you have done, it is invaluable, and has been and will continue to be very helpful!
  9. There is also a new File Activity plugin, designed to help with this problem.
  10. All normal stuff. Sometimes, a motherboard BIOS update can remove a couple of them, but everyone has lines like those.
  11. It wouldn't be applicable for a new system, so perhaps shouldn't even appear then. But take a look at my newest suggestion, linked below; it makes the "parity is valid" checkbox almost obsolete, almost not needed any more, by letting the system decide when parity is valid or not. I think I'd still want it available for the odd case where you reconstruct an array that used-to-be. Like a lost super.dat? Like a v4.7 array being converted, in a clean install? See: Allow drive reassignments without requiring New Config
  12. The new Retain feature, part of New Config, is a great thing that makes it much easier to rearrange drive assignments. But it's still a fairly heavy task, one that causes risk and confusion for some new or non-technical users. It used to be much easier in earlier versions: in 6.0 and 6.1 you could stop the array, then swap drive assignments or move them to different drive numbers/slots, without having to do a New Config or being warned that parity would be erased, and just start the array up with valid parity, so long as exactly the same set of drives was assigned. It would really make life easier, and less confusing for some users, if we could return to that behavior when it is safe to do so.

      An implementation suggestion to accomplish the above:
      - At start, or when super.dat is loaded, or when the array is stopped, collect all of the drive serial numbers (and model numbers too if desired), separated by line feeds, and save that as an unsorted blob (for Parity2). Sort the blob (an ordinary alphabetic sort) and save that as the sorted blob (for Parity1).
      - At any point thereafter, if there has been a New Config or there have been any drive assignment changes at all, then before posting the warning about parity being erased, collect new blobs and compare them with the saved ones.
      - If the sorted blobs differ, then parity is not valid, and the erased warning should be displayed. If the unsorted blobs differ, then Parity2 is no longer valid. But if they are the same, then parity is still valid, and the messages and behavior can be adapted accordingly.

      Sorting the blob puts the drives in a consistent order that removes the effect of drive numbering and movement. So long as it's exactly the same set of drives, the sorted blob will match, no matter how they have been moved around. The really nice effect is that users won't unnecessarily see the "parity will be erased" warning, or have to click the "parity is valid" checkbox. The array will just start like normal, even if a New Config had been performed (so long as the blobs matched). You *almost* don't need the "Parity is already valid" checkbox any more. The one complication to work around is that if the user adds a drive and unRAID clears it, or a Preclear signature is recognized on it, then the blobs won't match even though parity is valid anyway. I *think* it's just a matter of where you place the test, or else collecting the comparison blob without the new cleared disk.

      Edit: I need to add that ANY drive assignment change invalidates Parity2.
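      Here's a minimal shell sketch of the blob comparison, just to make the idea concrete (the lsblk usage and the file paths are illustrative only, not how unRAID actually stores its configuration):

          # Collect serial+model for each drive, one per line
          lsblk -dno SERIAL,MODEL /dev/sd[a-z] > /tmp/blob.unsorted    # order-sensitive, for Parity2
          sort /tmp/blob.unsorted > /tmp/blob.sorted                   # order-independent, for Parity1

          # Later, after a New Config or reassignment, collect again and compare
          lsblk -dno SERIAL,MODEL /dev/sd[a-z] > /tmp/new.unsorted
          sort /tmp/new.unsorted > /tmp/new.sorted

          cmp -s /tmp/blob.sorted /tmp/new.sorted && echo "same drive set: parity is still valid"
          cmp -s /tmp/blob.unsorted /tmp/new.unsorted && echo "same order too: Parity2 is still valid"

      A real implementation would read the slot assignments from super.dat rather than device order, since /dev/sd letters can change between boots.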
  13. After using New Config, the warning "All data on the parity drive will be erased when array is started" appears. Nothing wrong with this normally, but if the user clicks the checkbox for "Parity is already valid", the warning remains. This has caused enormous confusion for one or more new or non-technical users, who cannot see past the warning and believe their parity drive will still be erased. Most of us know what is meant when we click the box to say parity is valid, but all that new users see and understand is that parity is valid AND the parity drive will be erased! If the checkbox is checked, the warning should either be removed, or replaced with something like "All current parity data will be retained, not erased".
  14. Harro and principis, you have posted in a very old thread about a very old version, Plex in v5, which was 32 bit, very different from what you are running. At that time, they had never heard of Dockers or diagnostics or XFS. Please post again in the support thread for the particular Plex you have installed.
  15. I'm sorry guys, but this all seems way too premature! You're trying to get older tech to work with the newest hardware, without any compatibility updates specific to that new hardware. I would not expect JonP or anyone else to participate here until they had first added what they could: a Ryzen-friendly kernel with kernel hardware support, Ryzen-tweaked KVM and its related modules, and various system tweaks to optimize the Ryzen experience. After that, they can join you and participate. It's like having an old version with bugs, and an update with fixes. Why would a developer want to discuss problems with the old? They are always going to want you to update first and test; then you can talk. There's so much great work in this thread, especially from Paul, but it's based on the old stuff, not on what you will be using, so it seems to me that much of the effort is wasted. Patience!!!
  16. I saw you had added it, was glad to see it, and yes, I've been very pleased with the reporting it gives you, most of the time. I don't remember the details, but I believe I've seen a report or two of other hardware-subsystem MCE issues that were too cryptic for me, and I couldn't find good advice online. But mostly it's great. SDguy, I believe it's reporting that your ECC RAM detected and corrected a memory error, so this looks harmless. But the fact that it's having them could possibly be related to your lockups in the past: a memory error that *couldn't* be corrected. You may want to try the PassMark MemTest86 on your RAM (it has ECC RAM support).
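      If you want to see the raw events yourself, a hedged suggestion (the log location is the usual Linux default, and the mcelog tool may or may not be installed on your build):

          # Look for machine check / ECC correction messages in the log
          grep -iE 'mce|machine check|hardware error' /var/log/syslog
          # If the mcelog tool is present, it can decode pending events
          mcelog

      An occasional corrected ECC error is harmless; a growing count on the same DIMM is the pattern worth acting on.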
  17. This is a good feature request (not a flaw), and I agree with it. It's not really a flaw, because New Config is basically resetting the array back to nothing. The new Retain feature is working well; it modifies New Config to reset the array config but retain some or all of the previous assignments. What we need now is for the Retain feature to also retain the existing file system info (ReiserFS, XFS, or BTRFS) for each drive that's retained, and not set them all to Auto. This would save steps for us, and avoid some risk and confusion.
  18. Thanks, that's good to know. Bad behavior, but apparently can be ignored!
  19. You'll want to ask in that thread, I didn't write it. But I would suppose it's meant to monitor file activity over a given period, so I would spin the drives down, let it run, wait until morning, and see if it indicates what may have caused any disks to be spun up.
  20. Not sure, but you may have to run those commands within the Docker container environment. There's a way to exec a shell within it, possibly covered in the Docker FAQ. But it would probably be better built into the container startup somewhere; try asking the elasticsearch container author to add it.
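      For reference, a typical way to get a shell inside a running container (the container name here is a guess; use the name shown by docker ps):

          docker ps                                  # find the container's name
          docker exec -it elasticsearch /bin/bash    # open a shell inside it

      Anything you run that way only lasts until the container is recreated, which is why having the author bake it into the container startup is the better long-term fix.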
  21. I have seen one other case like this, and no, I can't explain it either. The long test should have stopped on the first pending sector and reported its LBA. It sounds like the drive is mostly OK, and the sectors are readable. The only way to clear pending sectors is to write to them. So what generally has to be done in unRAID is to either rebuild the drive onto itself (forcing a rewrite of every sector with what is already there), or pull it and Preclear it, then rebuild onto it or a substitute, or run a non-destructive badblocks test on it (using the -n option, I believe, which rewrites each sector onto itself).
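      If you go the badblocks route, something like this, with the usual caveats (the drive must be out of the array and unmounted, and /dev/sdX is a placeholder, so double-check the device name):

          # Non-destructive read-write test: reads each block, writes test
          # patterns, then restores the original data
          badblocks -nsv /dev/sdX

      The rewrite is what gives the drive the chance to remap or clear its pending sectors.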
  22. The File Activity plugin that rpowers mentioned is what was written to help users attempt to figure out what is spinning up their drives.
  23. tunetyme, you started off by saying that you followed the wiki step by step, but take a look again at Step 16. Somehow you missed that one. I do apologize for the instructions seeming convoluted to you, but the step was there.

      I looked at each of the ways you were suggesting to do it, and I have to be honest: not only will they take two or three times as long, they also seem more convoluted to me, once you add in all the little details needed. I still believe (and it's just my opinion!) that if you want the easiest and fastest way to do it, AND want to preserve parity and your User Share configuration, then the wiki method is the best one. Obviously I need someone else to write it though! Except for preparing the initial disk, there is absolutely no clearing done, no Preclearing done, no parity builds done, and no file is ever copied more than ONE time.

      I'll add more words to Step 16 to display the message you saw ("All data on the parity drive will be erased when array is started"), then tell you to ignore it and click the checkbox to indicate "Parity is already valid". Perhaps that will make it clearer?

      After the copying of a drive is done, it only takes a few minutes before you can start copying the next drive: stop the array, New Config with Retain: All, swap the drive assignments, correct the file system formats, optionally start and stop the array to check it, change the file format of the cleared drive to XFS, start the array and allow the drive to be formatted, and you're ready to start copying again.

      Here's a summary of the wiki method:
      - Steps 1 - 7 are just prep: figuring out a strategy and preparing the initial drive. Plus, I recommend a parity check, so you don't run into drive problems during the process, and because if parity is not good, there's no point in preserving it.
      - Steps 8 - 9 are copying, with optional additional verification (one way to verify is sketched at the end of this post).
      - Steps 10 - 18 are just the few minutes of swapping the drives and formatting for the next copy.
      - Step 19 just tells you to loop back to Step 8 to start copying again.

      At the end, I do tell you there are a few redundant steps in there, but I prefer having them because it seems safer that way. Overall, there are just three phases: prep, copy, swap, then repeat. But I really do welcome improvements and suggestions for simplification, or even full rewrites.

      I'd like to add a summary of the various possible methods at the top. I think if a user read a summary first, like the one above, they would be less likely to feel the procedure is convoluted. Plus, if it suddenly did start to feel convoluted or wrong, they would know they had gone off track somewhere.

      There is a faster method if you have a lot of data: unassign the parity drive, turn off User Shares, and skip the swapping. That will make the copying faster (no parity drive), but you will still have to allow a day or two afterward to rebuild parity. And while you won't have to worry about the complications of messed-up inclusions and exclusions and file duplication during the process, you will still have to locate where everything is afterward, and correct all of the inclusions.

      The advantage of the wiki method is that the array always stays the same except for brief intervals (a few minutes each): before you start, during the process, and after you're done. The only difference is that each logical drive is now a different physical drive. Parity was always preserved, and so was your User Share configuration, and except for those brief intervals normal operation was fine.
      If you had a second parity drive, it would need to be rebuilt, but that is true for almost all methods. This would be a good feature request, and I agree with it. It's not really a flaw, as New Config is basically resetting the array back to nothing. So it's as if it has never seen the disks you may then assign to it, which is also why it MUST present the message that parity will be rebuilt when you use New Config: it assumes this is a new array, with new disks it has never seen, and a new parity drive. The Retain feature is essentially brand new for us; it modifies New Config to reset the array config but retain some or all of the previous assignments. What we need now is for the Retain feature to also retain the existing file system info for each drive. This would save steps for us, and avoid some risk and confusion. I'm sorry if I sound defensive about what I've written; I do welcome improvement. jonathanm has been pointing out one of the constraining elements of my method, and I want to comment on that and other things, like the problems of unBALANCE, but in another post.
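      For the optional verification in Steps 8 - 9, here is one way to do it, a hedged sketch using the same example paths as earlier in this thread (a checksum compare is slow but thorough):

          # Re-run rsync in checksum + dry-run mode: it prints nothing to copy
          # if every file on the destination matches the source
          rsync -avPX --checksum --dry-run /mnt/disk3/ /mnt/disk7

      If the output lists no files beyond the incremental file list header and summary, the copy verified cleanly.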