Community Reputation: 24 Good

About bnevets27


  1. Ah ok yeah that makes sense. 999 it is then, thanks!
  2. Dumb question? So I ran the command to start 3 plotting tasks, one in each window of tmux. Great, all worked, got 3 plots. I expected it to start a new plot after finishing, but apparently it does not, so I would have to run the command again. Obviously it doesn't make sense to manually start each plotting task, so how does one make sure a new plot starts after the last one finishes?
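One common answer to the question above is to wrap the plot command in a loop inside each tmux window, so a new plot starts the moment the previous one finishes. A minimal sketch, assuming a `chia plots create`-style command (the flags in the comment are placeholders, not taken from this thread):

```shell
#!/bin/sh
# Minimal sketch: run plots back to back in one tmux window.
# Replace the echo below with your actual plot command, e.g.:
#   chia plots create -k 32 -t /path/to/temp -d /path/to/plots
run_plots() {
  count=$1
  i=1
  while [ "$i" -le "$count" ]; do
    echo "starting plot $i of $count"   # stand-in for the real plot command
    i=$((i + 1))
  done
}

# One invocation per tmux window; each window then plots continuously.
run_plots 3
```

An endless variant (`while true; do <plot command>; done`) works the same way if you never want the window to go idle.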
  3. My thinking was this, without any real idea how it works: 1) Parity 2 info gets written to (overwrites) the data disk. Result: Parity 2 and the data disk now contain the same data; the data disk was of course removed from the array, so it's emulated. At this point there is still dual parity, and parity 2 actually exists on 2 disks (the original parity 2 and the new parity 2 being the "old" data disk). 2) The data disk (which is no longer a data disk and is now a clone of parity 2) is then assigned as the parity 2 disk. I assume this is where you would "trust parity". So
  4. Ah ok, I'm on the same page now. The initial question was basically just that: can a parity swap be done with dual parity, on the second parity disk? If it were a known/tested procedure then I would say it would probably be the safer of the two options, from what I understand. But as you say, without knowing, the first suggestion of just going back to single parity during the disk swap makes sense. They are currently healthy but they are getting old and I have had the odd one drop out. And after a bunch of testing they seem fine, have been re-added, and look to be fine. B
  5. So remove both drives at the same time? As I don't see how else to get the parity 2 drive into the data location. Sorry if I seem a bit dense, as I've never removed/failed 2 drives at the same time before. Or are you saying: turn dual parity off, making parity 2 free. Move parity 2 to the data location. Rebuild the data. Then turn dual parity back on, add the "old" data disk to the parity 2 location, and rebuild parity. So essentially, making my system a single parity system, freeing up the second parity. Then setting it back to dual parity and building the parity 2, p
  6. Hoping I could get some clarification before proceeding. I can't see any reference to a second parity drive in this wiki: And that procedure mentions using 3 disks, while I only have 2 I'm working with. Recap: Parity 2 Drive and Data Drive, both healthy and in service, but I would like to swap their locations. The data drive becomes parity 2 and parity 2 becomes the data drive.
  7. I agree and disagree. I personally currently have more RAM than I know what to do with, so I couldn't care less how much RAM unraid uses. BUT, I'm also looking at changing that in the future, and I also help with builds for others with different requirements, and I am looking to do a straight stripped-down, lowest-specs, just-a-NAS type build. So I do see the need to be conservative too. At the same time, minimum-spec builds are coming with more memory than they used to, but that is still evolving. I've even noticed that on an old machine, older versions of unraid that were lighter run better the
  8. I'm trying to use this script to mount multiple shared drives. From everything I've read here I should just be able to run multiple versions of this script. But when I do, I get the following error: Failed to start remote control: start server failed: listen tcp bind: address already in use. I have rebooted. I'm only mounting the mergerfs in one of the versions of the script. Whichever script I run first will work, but I can't get a second to work due to the above error. Any clues?
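For reference, that error usually means two rclone processes are trying to bind the same remote-control port (rclone's rc listens on one TCP port by default). A sketch of the usual fix, assuming the rclone mount script discussed here; `--rc` and `--rc-addr` are real rclone flags, but the remote names and mount paths below are placeholders:

```shell
# First copy of the script: the default rc port is fine.
rclone mount remote1: /mnt/user/mount_rclone/remote1 --rc --rc-addr=:5572 &

# Second copy: give it its own port (or drop --rc entirely if that
# instance doesn't need remote control), so the two don't collide.
rclone mount remote2: /mnt/user/mount_rclone/remote2 --rc --rc-addr=:5573 &
```

In other words, each extra copy of the script needs its own `--rc-addr`, which matches the symptom that whichever script starts first wins the port.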
  9. Thank you both @trurl and @itimpi. Just to confirm: I can pull my parity 2 drive and my data drive out, swap them, and then do a parity copy (Step 13). During this procedure, I still have dual parity and I'll only really have one "failed" disk? So if I understand correctly: Unraid knows the data disk is actually the old parity 2 disk, and it then copies the old parity 2 data to the new parity 2 disk (being the old data disk). Once done, I then have parity 1 intact as it hasn't been touched. Parity 2 (new) has been copied from parity 2 (old) and parity 2 (old) still has the parity 2 dat
  10. That's kind of confusing advice, as the "third drive" is the one with increasing reallocated sectors. But it's not part of the array, as I had said above; it's a spare drive sitting on the shelf that I could temporarily introduce to the array to aid in this swap. Because, as you said, you would need a third drive to maintain parity. Though technically, with 2 parity drives I probably could maintain parity without needing a third drive, but I wouldn't be maintaining dual parity during the procedure. I figured I could do that, but wouldn't that be risky? At that point if any drive fails I don't think I would
  11. Basically the same reason and situation as the OP, but I have dual parity; I don't think the OP did/does. I have a data disk that would be better used as a parity drive due to it being faster. I was interested in the procedure to make that swap/move. And in this situation there is no other available drive. Situation 2: still looking for the same outcome, but with a third drive that could be used temporarily during the swapping/moving. Unfortunately, the only drive I have around isn't in great shape (increasing reallocated sectors), but it's still functional and passed preclear and SMART tests. It's not
  12. Hope the OP doesn't mind me hijacking this thread, but I have basically the same question/situation. I have a data drive I would like to swap to my parity 2 location. Sounds like there isn't a way to do this without at least losing one parity drive during the process. What would be the safest way to do this without a spare drive? What would the process be with a temp spare drive? I have a drive that isn't healthy, so I would want it in the array for the shortest period of time. The procedure with the temp spare drive I think is simple enough, but there would be 3 rebuilds/parity checks which
  13. Ever since I started using unraid (2010, v4.7), the community has been the leader in adding enhancements/value/features to unraid. Initially unraid didn't do much more than be a NAS, which was exactly what it was designed to do. But it was the community who started to build plugins, and guides on how to use them, to add way more functionality than the core function of just a NAS. That's not to say limetech was really lacking. They did and do build a rock-solid NAS, and that was the motto for the longest time. But a lot of users, and definitely myself, wanted to keep utilizing the ha
  14. True, but the successful backup that it completed is unfortunately not a backup of a full working system, and in this case basically blank. I was lucky I did a manual copy at the start, so it wasn't a complete loss. I usually also keep (by renaming the folder) a permanent copy of a backup that CA Backup/Restore creates once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary. But yes, with "delete backups every 60 days" and say a minimum of 2 backups kept, it would definitely keep a good backup long enough. In that same r