bnevets27

Members
  • Posts

    576
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


bnevets27's Achievements

Enthusiast (6/14)

Reputation: 26

  1. Ah ok yeah that makes sense. 999 it is then, thanks!
  2. Dumb question? So I ran the command to start 3 plotting tasks, one in each window of tmux. Great, all worked, got 3 plots. I expected it to start a new plot after finishing, but it apparently does not, so I would have to run the command again. Obviously it doesn't make sense to manually start each plotting task, so how does one make sure a new plot starts after the last one finishes? (One way to do this with a simple shell loop is sketched after this list.)
  3. My thinking was this, without any real idea how it actually works. 1) Parity 2 info gets written to (overwrites) the data disk. Result: Parity 2 and the data disk now contain the same data; the data disk was of course removed from the array, so it's emulated. At this point there is still dual parity, and parity 2 actually exists on 2 disks (the original parity 2 and the new parity 2, being the "old" data disk). 2) The data disk (which is no longer a data disk and is now a clone of parity 2) is then assigned as the parity 2 disk. I assume this is where you would "trust parity". So still dual parity, data disk being emulated. 3) Finally, add the disk to the "failed"/emulated data slot and rebuild the data. So in this scenario, dual parity is maintained. I just wasn't sure if this was possible.
  4. Ah ok, I'm on the same page now. The initial question was basically just that: can a parity swap be done with dual parity, on the second parity disk? If it were a known/tested procedure then I would say it would probably be the safer of the two options, from what I understand. But as you say, without knowing, the first suggestion of just going back to single parity during the disk swap makes sense. The drives are currently healthy but they are getting old and I have had the odd one drop out, and after a bunch of testing they seemed fine, were re-added, and look to be fine. The important stuff is backed up, but I'd prefer not to lose anything regardless, of course. Just a bit of a peace of mind thing. I was having an issue with the server where it was randomly having unclean shutdowns, so I was a bit leery of doing anything. Finally figured out it was the power supply. It would run for weeks, heavy load, light load, no problem, then just shut off. Long story short, I'll take your advice and go back to single parity. I wish I had known the full procedure at the beginning of all this, though. This all started with a failed data drive. I had wanted to upgrade parity 2 with the disk that was replacing the failed drive, but wasn't sure if the parity swap worked the same way with dual parity (I have done it in the past on single parity), and being a bit concerned, waiting for a response, and not fully understanding how to do it with dual parity, I just replaced the failed disk to get the array healthy again. I have a backup server with dual parity that I could maybe try this out on in the future, if no one else has given it a try. Thanks for the help and reassurance trurl
  5. So remove both drives at the same time? As I don't see how else to get the parity 2 drive into the data location. Sorry if I seem to be a bit dense, as I've never removed/failed 2 drives at the same time before. Or are you saying: turn dual parity off, making parity 2 free; move parity 2 to the data location; rebuild the data; then turn dual parity back on, add the "old" data disk to the parity 2 location, and rebuild parity. So essentially, making my system a single parity system, freeing up the second parity, then setting it back to dual parity and rebuilding the parity 2 data. I don't care about the array being offline; I want the method with the highest level of protection against a disk failure. I was hoping to retain dual parity like in the parity swap procedure, where the parity data is duplicated. Yes, both disks are the same size.
  6. Hoping I could get some clarification before proceeding. I can't see any reference to a second parity drive in this wiki: https://wiki.unraid.net/The_parity_swap_procedure And that procedure mentions using 3 disks, while I only have 2 I'm working with. Recap: a Parity 2 drive and a data drive, both healthy and in service, but I would like to swap their locations. The data drive becomes parity 2 and parity 2 becomes the data drive.
  7. I agree and disagree. I personally have more RAM than I know what to do with at the moment, so I couldn't care less how much RAM unraid uses. BUT, I'm also looking at changing that in the future, and I also help with builds for others with different requirements, and I am looking to do a straight stripped-down, lowest-specs, just-a-NAS type build. So I do see the need to be conservative too. At the same time, minimum-spec builds are getting more memory than they used to, but that is still evolving. I've even noticed that on an old machine, older versions of unraid that were lighter run better than the more recent builds. Maybe outside the scope of this thread, but ever since unraid became more feature rich there has been this "battle" between a raw minimalistic NAS and a full feature-rich NAS. I think we might be getting to the point where 2 versions would be better: a "light" and a full. That would be the simplest, and would probably cover most of the two groups. You could also go down the customize-your-install route and get to choose what you want included before your image is built. That would be great for more advanced or particular users. Of course this requires more work and development, and that doesn't come for free. Not sure how well known this is, so it's good that it's brought up. It is definitely helpful. I've been using mc forever for managing my files server side/locally. Also a tip: if you're doing a large transfer, use screen so the transfer will continue if for some reason you lose connection to the server or the machine you are running mc from goes to sleep (a quick sketch of that workflow is after this list). But I will also say it's a bit of a pain/slow. It would be much quicker (and easier for a large group of people) to just click on an icon in the web UI, copy and paste (or drag and drop) and be done. With mc you still have to open a terminal, type mc, navigate to the file/folder you want to move in one pane, do the same in the other pane, then hit the correct F key. It's really not the most arduous task, but of course it could be much better/nicer/easier. It would also be nice to have a bit more protection; accidental deletion is definitely possible on a large scale, though this is true for most ways people manage their files.
  8. I'm trying to use this script to mount multiple shared drives. From everything I've read here I should just be able to run multiple versions of this script. But when I do I get the following error: Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use I have rebooted. I'm only mounting the mergerfs in one of the versions of the script. Whichever script I run first will work, but I can't get a second to work due to the above error. Any clues? (One possible workaround is sketched after this list.)
  9. Thank you both @trurl and @itimpi . Just to confirm: I can pull my parity 2 drive and my data drive out, swap them, and then do a parity copy (Step 13). During this procedure I still have dual parity and I'll only really have one "failed" disk? So if I understand correctly, unraid knows the data disk is actually the old parity 2 disk; it then copies the old parity 2 data to the new parity 2 disk (being the old data disk). Once done, I then have parity 1 intact, as it hasn't been touched. Parity 2 (new) has been copied from parity 2 (old), and parity 2 (old) still has the parity 2 data on it. So essentially, in this state there are 2 parity 2 disks that are clones of each other. At this point I now have 2 valid parity disks and 1 "disabled" disk (the old parity 2 drive that has become a data drive). Unraid then rebuilds the data onto the "new" data disk. So I maintain dual parity, with one device failure, until the procedure finishes. So I would need to lose 2 other drives (3 in total) to have any data loss, correct? Though now, looking over the guide again, I don't understand how to correctly remove 2 drives at the same time and have unraid able to track which is the old parity. Unless I'm overthinking it.
  10. That's kind of confusing advice, as the "third drive" is the one with increasing reallocated sectors. But it's not part of the array, as I had said above; it's a spare drive sitting on the shelf that I could temporarily introduce to the array to aid in this swap, because as you said you would need a third drive to maintain parity. Though technically, with 2 Parity drives I probably could maintain parity without needing a third drive, but I wouldn't be maintaining dual parity during the procedure. I figured I could do that, but wouldn't that be risky? At that point, if any drive fails I don't think I would be protected. In that situation, drive 22 would be unassigned/pulled/emulated ("failure 1"), and Parity 2 would be unassigned/pulled and now invalid ("failure 2"). So in essence I've lost a drive (22) and Parity 2. I would assign them to the slots I want them in. It would then rebuild disk 22, then rebuild Parity 2, I assume? Or the opposite of that: build Parity 2, then disk 22, like you said. Either way, during that procedure, if anything fails it will be lost, correct? As that would be 3 "failed" drives. The only reason to bring the unhealthy disk into the picture was that this process would then be possible: remove disk 22 (red) and replace it with the unhealthy disk (blue). Unraid rebuilds disk 22 and now the array is healthy. Remove Parity 2 (gold) and put in the disk I want there (which happens to be what used to be disk 22 (red)). Let unraid build parity; again now healthy. Then remove disk 22 (blue) and put what used to be Parity 2 (gold) into slot 22. Unraid rebuilds the drive. Array back healthy, disks where I want them, and the unhealthy drive (blue) is not in the array. At least that way, if the unhealthy drive dies I still have dual parity during the process, with the most risk being when rebuilding Parity 2. I was hoping to have the better level of protection without rebuilding 3 times. I guess I was just looking for whether there was another way, or if it is indeed possible to remove 2 drives from the array and swap them, as there would be 2 missing drives at the same time, with one being parity. I also wasn't sure, if I were to do "option 1" and just swap the disks, what the procedure would actually be. If I unassign disk 22 and move it to Parity 2, wouldn't unraid not start, as it would be misconfigured, and therefore I would have to do a new configuration? If I do a new configuration with 2 drives unassigned (or intentionally failed, another way to look at it), won't that lose all of the data? Outside the main question here, but would it not be technically possible to do what I suggested? Just maybe not currently? Tell unraid that you plan to use disk 22 as Parity 2. Unraid makes a clone of Parity 2 onto disk 22. This would take disk 22 offline and it would have to be emulated, but the advantage is only 1 disk is "failed"/offline during this cloning. Stop the array. Assign the now-cloned Parity 2 (disk 22) into Parity 2, and trust parity. Then you could replace disk 22 with the drive that used to be used as Parity 2, and then let unraid rebuild the data. I can see that the array may not be usable during the process, but it would be safer.
  11. Basically the same reason and situation as the OP, but I have dual parity; I don't think the OP did/does. I have a data disk that would be better used as parity due to it being faster. I was interested in the procedure to make that swap/move, and in this situation there is no other available drive. Situation 2: still looking for the same outcome, but with a third drive that could be used temporarily during the swapping/moving. Unfortunately the only drive I have around isn't in great shape (increasing reallocated sectors), but it's still functional and passed preclear and SMART tests. It's not part of the array; it's just a spare drive on the shelf. The array is currently healthy. So the second situation is a separate question: what would the procedure be if I had a third drive that could be used during the process, but I don't want it used in the array when the swapping/moving is all done and the data and parity drives are where I want them? Simply put: Parity 2 (red) becomes data disk 22, and data disk 22 (gold) becomes Parity 2. The reason I mention 2 different scenarios is that I would of course prefer the one that keeps the array the most protected the longest.
  12. Hope the OP doesn't mind me hijacking this thread, but I have basically the same question/situation. I have a data drive I would like to swap to my Parity 2 location. Sounds like there isn't a way to do this without at least losing one parity drive during the process. What would be the safest way to do this without a spare drive? What would the process be with a temp spare drive? I have a drive that isn't healthy, so I would want it in the array for the shortest period of time. The procedure with the temp spare drive is, I think, simple enough, but there would be 3 rebuilds/parity checks, which is quite a few. I know it's not a common procedure, but it would be nice if unraid could handle these situations a bit better. Thinking out loud, but if unraid could copy (in my case) the Parity 2 info onto the data disk I plan to move to the Parity 2 location... I'm aware that that data drive would have to be taken offline and its contents would be emulated. Then, when that process of copying Parity 2 to the data drive finishes, move the data drive to Parity 2 and parity would still be valid. I could then take my "old" Parity 2, put it in the data slot, and have it rebuild. Unless this is somehow already possible?
  13. Ever since I started using unraid (2010, v4.7), the community is what has been the leader in adding enhancements/value/features to unraid. Initially unraid didn't do much more than being a NAS, which was exactly what it was designed to do. But it was the community who started to build plugins, and guides on how to use them, adding way more functionality than the core function of just a NAS. That's not to say limetech was really lacking; they did and do build a rock solid NAS, and that was the motto for the longest time. But a lot of users, and definitely myself, wanted to keep utilizing the hardware more and more, way beyond just a NAS. The community really thrived at that. I've been able to add much more functionality to my system and learn and use more and more programs thanks to unraid/limetech and the community. I've really enjoyed the community here; it has felt small and personable, and while it's growing it still has that small, tight-knit feeling. A place where people give great support and never make anyone feel like they have asked a stupid question. Unraid is where I've learned a lot about Linux, dockers, etc., again thanks to the community. I'm sure I'm just stating the obvious here at this point. Not to say unraid is worthless without the community, but the community is what helps it thrive, or maybe even the reason why it thrives. On a somewhat selfish note, I have very much benefited from the work of all the community devs, especially @CHBMB and @bass_rock with their work on the nvidia plugin. For the people that use it, it was a huge breath of fresh air for older systems running plex, and it is definitely not exclusive to older systems. I also used the DVB drivers that were built by them. Unraid/limetech really didn't seem interested at the time in providing support for the nvidia drivers, and @CHBMB and the @linuxserver.io team saw how much the "niche" (I honestly don't think it's that niche) group of people really wanted it and would really benefit from it. I was ecstatic when it was finally released. I'm not sure where the notion came from that the "third party" kernel was unstable, as @CHBMB and the @linuxserver.io team work really hard at releasing good dockers/plugins etc., with very good support and documentation. It's such a no-brainer when going into CA Apps and seeing something by @linuxserver.io; it's always what I'll install given the choice. I had mentioned in the request thread for GPU drivers in the kernel to @limetech that having the ability for hardware acceleration in plex via nvidia cards would attract users. I'm sure it did, and it likely attracted users for other reasons once @CHBMB and @bass_rock got it working. Limetech didn't ever comment in that thread and, from what I remember, was not at all interested at the time in pursuing it, at least from what I could tell from public information. Yet @CHBMB and @bass_rock worked hard to make it happen, and as I mentioned, I'm sure that created revenue for limetech and I know it made many people happy. I'm glad limetech has decided to build in the feature and support it at the company level. And really this should be a big win for both @limetech and @CHBMB: for limetech because it's a great added feature, and for CHBMB to give him a break from having to support that plugin, which was clearly a lot of work, maybe even freeing up his time to work on other great things for the community.
While I'm glad he is finally getting a break, I do hope he and any other developers that may have decided to leave come back, as I really do appreciate their work. I also hope that this situation actually ends up being more positive than negative. It looks like limetech has learned, even more than I'm sure they already knew, how valuable their community developers are. And hopefully more communication will help build a stronger, better relationship with the community and its developers, because the combination of the limetech team and the community developers has created an ever-evolving, fantastic product. Thank you to the limetech team, the community, and all the community developers.
  14. True, but the successful backup that it completed is unfortunately not a backup of a fully working system, and in this case is basically blank. I was lucky I did a manual copy at the start, so it wasn't a complete loss. I usually also keep (by renaming the folder) a permanent copy of a backup that CA Backup/Restore creates once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary (a rough sketch of that idea is after this list). But yes, with "delete backups every 60 days" and, say, a minimum of 2 backups kept, it would definitely keep a good backup long enough. In the same vein, "keep every nth backup indefinitely" would be nice too, but I can understand not wanting to put in too many options. Thanks squid for the great app!
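
A minimal sketch for post 2, assuming the plotting in each tmux window is started with a single shell command (the command name below is a placeholder for whatever you actually run): wrapping it in a loop makes a new plot start as soon as the previous one finishes.

    # Run this inside each tmux window instead of the bare plotting command.
    # "your-plot-command" is a placeholder, not a real binary.
    while true; do
        your-plot-command || break    # stop looping if a run fails
    done

Detach from tmux as usual and the loop keeps feeding new plots in that window.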
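For the screen tip in post 7, a quick sketch of the workflow so a long mc transfer survives a dropped connection (the session name "transfer" is just an example):

    screen -S transfer    # start a named screen session on the server
    mc                    # run Midnight Commander inside it and start the copy
    # detach with Ctrl-a d; the transfer keeps running on the server
    screen -r transfer    # reattach later to check on progress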
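On the error in post 8: rclone's remote control server listens on 127.0.0.1:5572 by default, so a second script instance that also enables --rc collides with the first. A sketch of one workaround, assuming the script ultimately calls rclone mount with --rc (the remote names and mount points below are placeholders): give each extra instance its own --rc-addr port, or drop --rc from all but one instance.

    # First instance keeps the default remote control port (5572)
    rclone mount remote1: /mnt/user/mount_rclone/remote1 --rc &

    # Second instance gets its own port so the two RC servers don't collide
    rclone mount remote2: /mnt/user/mount_rclone/remote2 --rc --rc-addr=127.0.0.1:5573 &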
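And for the folder-renaming trick in post 14, a rough, hypothetical sketch of "keep every nth backup" done by hand, assuming dated backup folders live under a single directory and assuming renamed folders are ignored by the app's pruning (both are assumptions for illustration, not documented CA Backup/Restore behaviour):

    # Rename every 4th dated backup folder so it falls outside the prune pattern.
    # BACKUP_DIR and the naming scheme are assumptions for illustration only.
    BACKUP_DIR=/mnt/user/backups/CA_Backup
    N=4
    i=0
    for dir in "$BACKUP_DIR"/*/; do
        i=$((i + 1))
        if [ $((i % N)) -eq 0 ]; then
            mv "$dir" "${dir%/}_keep"
        fi
    done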