bnevets27


Posts posted by bnevets27

  1. 4 minutes ago, burgess22 said:

    Whatever value you put in for -n is how many plots it will queue. This command will use 4 GB RAM and 2 threads and queue it once, so change the -n to 999 if you like and it will run that command 999 times before stopping.

     

    docker exec -it chia venv/bin/chia plots create -b 4000 -r 2 -n 1 -t /plotting/plot2 -d /plots

    Ah ok yeah that makes sense. 999 it is then, thanks!
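    For reference, that's just the quoted command with the queue count bumped up (same RAM/thread settings and paths as the example above):

    docker exec -it chia venv/bin/chia plots create -b 4000 -r 2 -n 999 -t /plotting/plot2 -d /plots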

  2. Dumb question? So I ran the command to start 3 plotting tasks, one in each window of tmux. Great, all worked, got 3 plots. I expected it to start a new plot after finishing, but apparently it does not, so I would have to run the command again. Obviously it doesn't make sense to manually start each plotting task, so how does one make sure a new plot starts after the last one finishes?

  3. My thinking was this, without any real idea of how it actually works:


    1) Parity 2 info gets written to (overwrites) the data disk. Result: Parity 2 and the data disk now contain the same data. The data disk of course was removed from the array, so it's emulated. At this point there is still dual parity, and parity 2 actually exists on 2 disks (the original parity 2 and the new parity 2, being the "old" data disk).

    2) The data disk (which is no longer a data disk and is now a clone of parity 2) is then assigned as the parity 2 disk. I assume this is where you would "trust parity".  So still dual parity, data disk being emulated. 

    3) Finally, add the disk to the "failed"/emulated data slot. Rebuild the data.

     

    So in this scenario, dual parity is maintained. I just wasn't sure if this was possible.

  4. Ah ok. I'm on the same page now. 

     

    The initial question was basically just that. Can a parity swap be done with dual parity, on the second parity disk? If it was a known/tested procedure then I would say it would probably be the safer of the two options, from what I understand. But as you say, without knowing, the first suggestion of just going back to single parity during the disk swap makes sense.

     

    They are currently healthy but they are getting old and I have had the odd one drop out. After a bunch of testing they seemed fine, were re-added and look to be fine. But yes, important stuff is backed up, though I'd prefer not to lose anything regardless, of course. Just a bit of a peace of mind thing. I was having an issue with the server where it was randomly having unclean shutdowns, so I was a bit leery of doing anything. Finally figured out it was the power supply. It would run for weeks, heavy load, light load, no problem, then just shut off.

     

    Long story short, I'll take your advice and go back to single parity. I wish I had known the full procedure at the beginning of all this though. This all started with a failed data drive. I had wanted to upgrade parity 2 with the disk that was replacing the failed drive but wasn't sure if the parity swap worked the same way on dual parity (I have done it in the past on single parity). Being a bit concerned, waiting for a response and not fully understanding how to do it with dual parity, I just replaced the failed disk to get the array healthy again.

     

    I have a backup server that has dual parity I could maybe try this out on in the future, if no one else has given it a try.

     

    Thanks for the help and reassurance trurl

  5. So remove both drives at the same time? I don't see how else to get the parity 2 drive into the data location. Sorry if I seem a bit dense, as I've never removed/failed 2 drives at the same time before. 

     

    Or are you saying: turn dual parity off, making parity 2 free. Move parity 2 to the data location. Rebuild the data. Then turn dual parity back on, add the "old" data disk to the parity 2 location and rebuild parity. So essentially, making my system a single parity system, freeing up the second parity, then setting it back to dual parity and rebuilding the parity 2 data.

     

    I don't care about the array being offline; I want the method with the highest level of protection against a disk failure. I was hoping to retain dual parity like in the parity swap procedure, where the parity data is duplicated.

     

    Yes, both disks are the same size. 

  6. I agree and disagree. I personally currently have more RAM than I know what to do with, so I couldn't care less how much RAM Unraid uses. BUT, I'm also looking at changing that in the future, I also help with builds for others with different requirements, and I am looking to do a straight stripped-down, lowest-specs, just-NAS type build. So I do see the need to be conservative too. At the same time, minimum-spec builds are getting more memory than they used to, but that is still evolving.

    I've even noticed that on an old machine, older versions of Unraid that were lighter run better than the more recent builds.

    Maybe outside of the scope of this thread, but ever since Unraid became more feature rich there has been this "battle" between a raw, minimalistic NAS and a full, feature-rich NAS. I think we might be getting to the point where 2 versions would be better: a "light" and a full. That would be the simplest option and would probably cover most of the two different groups. You could also go down the customize-your-install route and get to choose what you want included before your image is built. That would be great for more advanced or particular users.

    Of course this requires more work and development. That doesn't come for free.

    Except that every feature added directly to the OS consumes RAM, whether you use it or not. Unraid isn't like pretty much any other OS that runs from a drive; everything is installed fresh into RAM at boot time. That's why Unraid doesn't include drivers for everything normally supported in Linux, because all that space in RAM would be wasted for 90% of users.
     
    Every feature that is included in the stock base of Unraid must be carefully evaluated whether or not the increase in RAM required to store the files used is worth it. It's kind of a double whammy, the files consume space on the RAM drive, whether or not you use them. When you run it, it uses RAM as well, but as you said, that part is optional, you don't have to run it.
     
    Features added using the docker container system DON'T take up RAM, as those reside on the regular storage drives, and only use RAM when run.
     
    As Unraid has added more features, the RAM requirements have increased significantly. As a NAS only, 1GB of RAM was plenty. Currently, even if somebody only needs the NAS functionality, the bare minimum is 4GB, and even that is too tight for some operations. That's because all of the added features take up RAM, even though they aren't used.
     
    As time goes on, and the normal amount of RAM in older systems that people want to repurpose as NAS devices increases, then adding features becomes less of an issue. Right now though, we still regularly see people trying to get Unraid to run on 2GB of RAM.
     

    Not sure how well known this is, so it's good that it's brought up. It is definitely helpful. I've been using mc forever for managing my files server side/locally. Also a tip: if you're doing a large transfer, use screen so the transfer will continue if for some reason you lose connection to the server or the machine you are running mc from goes to sleep (a quick sketch is at the end of this post).

    But I will also say it's a bit of a pain/slow. It would be much quicker (and easier for a large group of people) to just click on an icon in the web ui, copy and paste (or drag and drop) and be done.

    With mc you still have to open a terminal, type mc, navigate to the file/folder you want to move in one pane and then do the same in the other pane, then hit the correct F key. It's really not the most arduous task but of course it could be much better/nicer/easier. It would also be nice to have a bit more protection; accidental deletion is definitely possible on a large scale. Though this is true for most ways people manage their files.
    Just thought I would mention in case anybody is unaware. Midnight Commander (mc from the command line), a textGUI file manager, is built-in and isn't difficult to figure out, google it.
     
    And of course, there are file manager docker apps.
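    A minimal sketch of the screen + mc tip from earlier in this post (the session name is just an example):

    screen -S filecopy mc    # start Midnight Commander inside a named screen session
    # detach with Ctrl-a d; the transfer keeps running on the server
    screen -r filecopy       # reattach later to check on it

    You can of course also just run screen first and then start mc inside the session; the point is only that the copy survives a dropped connection.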
  7. I'm trying to use this script to mount multiple shared drives. From everything I've read here I should just be able to run multiple versions of this script. But when I do I get the following error:

    Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use

     

    I have rebooted. I'm only mounting the mergerfs in one of the versions of the script. Whichever script I run first will work but I can't get a second to work due to the above error. 

    Any clues?
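    My guess, for what it's worth: 5572 is rclone's default remote control port, so if every copy of the script starts rclone with --rc, they will all try to bind the same address. Something along these lines might avoid it; this is only a hypothetical sketch, I haven't confirmed how this particular script passes its rclone options, and the remote name, mount point and port below are placeholders:

    # give the second instance its own remote control port (or drop --rc entirely)
    rclone mount gdrive2: /mnt/user/mount_rclone/gdrive2 --rc --rc-addr=localhost:5573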

  8. Thank you both @trurl and @itimpi . Just to confirm. I can pull my parity 2 drive and my data drive out, swap them and then do a parity copy (Step 13). During this procedure, I still have dual parity and I'll only really have one "failed" disk?

     

    So if I understand correctly: Unraid knows the data disk is actually the old parity 2 disk, and it then copies the old parity 2 data to the new parity 2 disk (being the old data disk). Once done, I then have parity 1 intact as it hasn't been touched. Parity 2 (new) has been copied from parity 2 (old), and parity 2 (old) still has the parity 2 data on it. So essentially in this state there are 2 parity 2 disks that are clones of each other. At this point I now have 2 valid parity disks and 1 "disabled" disk (the old parity 2 drive that has become a data drive). Unraid then rebuilds the data onto the "new" data disk.

     

    So I maintain dual parity, with one device failure until the procedure finishes. So I would need to lose 2 other drives (3 in total) to have any data loss, correct?

     

    Though now looking over the guide again, I don't understand how to correctly remove 2 drives at the same time while having Unraid able to track which is the old parity. Unless I'm overthinking it. 

  9. I would say using a disk that has increasing reallocated sectors is the most risky plan.

    Why not just keep that "third drive" in the array?

    That's kind of confusing advice, as the "third drive" is the one with increasing reallocated sectors. But it's not part of the array, as I said above. It's a spare drive sitting on the shelf that I could temporarily introduce to the array to aid in this swap, because, as you said, you would need a third drive to maintain parity. Though technically with 2 Parity drives I probably could maintain parity without needing a third drive, but I wouldn't be maintaining dual parity during the procedure.
    Simple approach would be to unassign disk22, reassign that disk as parity2, rebuild parity2, assign new disk as disk22, rebuild disk22. Since you have parity1 then everything will continue to work while disk22 is not assigned.

    I figured I could do that but wouldn't that be risky? At that point if any drive fails I don't think I would be protected. In that situation,

     

    Drive 22 would be unassigned/pulled/emulated. "failure 1"

    Parity 2 would be unassigned/pulled is now invalid. "failure 2"

    So in essence I've lost a drive (22) and Parity 2.

     

    I would assign them to the slots I want them in. It would then rebuild disk 22, then rebuild Parity 2, I assume? Or the opposite of that: build Parity 2 then disk 22, like you said.

     

    Either way during that procedure, if anything fails it will be lost, correct? As that would be 3 "failed" drives.

     

     

    The only reason to bring the unhealthy disk into the picture was that this process would then be possible:

    Remove disk 22 (red) and replace with the unhealthy disk (blue). Unraid rebuilds disk 22 and now the array is healthy.

    Remove Parity 2 (gold) and put in the disk I want there (which happens to be what used to be disk 22(red) ).

    Let Unraid build parity. Again, now healthy.

    Then remove disk 22 (blue) and put in what used to be Parity 2 (gold) into slot 22.

    Unraid rebuilds the drive. Array back healthy, disks where I want them and the unhealthy drive (blue) is not in the array.

     

     

    At least that way, if the unhealthy drive dies I still have dual parity during the process, with the only real risk being while rebuilding Parity 2.

     

    I was hoping to have a better level of protection without rebuilding 3 times.

     

     

    I guess I was just looking for whether there was another way, or if it is indeed possible to remove 2 drives from the array and swap them, as there would be 2 missing drives at the same time, with one being parity. I also wasn't sure, if I were to do "option 1" and just swap the disks, what the procedure would actually be.

     

    Because if I unassign disk 22 and move it to parity 2, wouldn't Unraid not start, since it would be misconfigured, and therefore I would have to do a new configuration? If I do a new configuration with 2 drives unassigned (or intentionally failed, which is another way to look at it), won't that lose all of the data?

     

     

    Outside of the main question here, but would it not be technically possible to do what I suggested? Just maybe not currently?

     

    Tell Unraid that you plan to use disk 22 as Parity 2. Unraid makes a clone of Parity 2 onto disk 22. This would take disk 22 offline and it would have to be emulated, but the advantage is only 1 disk is "failed/offline" during this cloning. Stop the array. Assign the now-cloned Parity 2 (disk 22) to the Parity 2 slot, and trust parity. Then you could replace disk 22 with the drive that used to be Parity 2, and then let Unraid rebuild the data. I can see that the array may not be usable during the process, but it would be safer.

     

     

     

  10. Basically the same reason and situation as the OP, but I have dual Parity; I don't think the OP did/does.

    I have a data disk that would be better used as a parity due to it being faster.

    I was interested in the procedure to make that swap/move. And in this situation there is no other available drive.


    Situation 2: Still looking for the same outcome but with a third drive that could be used temporarily during the swapping/moving. Unfortunately the only drive I have around isn't in great shape (increasing reallocated sectors) but it's still functional and passed preclear and SMART tests. It's not part of the array, it's just a spare drive on the shelf. The array is currently healthy.

    So the second situation is a separate question. What would the procedure be if I had a third drive that could be used during the process but I don't want it used in the array when the swapping/moving is all done and the data and Parity drives are where I want them.


    Simply put
    Parity 2 (red) becomes data disk 22
    Data disk 22 (gold) becomes Parity 2

    Reason I mention 2 different scenarios is I of course would prefer the one that keeps the array the most protected the longest.

  11. Hope the OP doesn't mind me hijacking this thread. But I have basically the same question/situation.

    I have a data drive I would like to swap to my Parity 2 location.

    Sounds like there isn't a way to do this without at least losing one Parity drive during the process.

    What would be the safest way to do this, without a spare drive?

    What would the process be with a temp spare drive? I have a drive that isn't healthy, so I would want it in the array for the shortest period of time.

    The procedure with the temp spare drive I think is simple enough, but there would be 3 rebuilds/parity checks, which is quite a few.

    I know it's not a common procedure, but it would be nice if Unraid could handle these situations a bit better. Thinking out loud, but what if Unraid could copy (in my case) the Parity 2 info onto the data disk I plan to move to the Parity 2 location? I'm aware that that data drive would have to be taken offline and its contents would be emulated. Then when that process of copying Parity 2 to the data drive finishes, move the data drive to Parity 2 and parity would then still be valid. I could then take my "old" Parity 2, put it in the data slot and have it rebuilt. Unless this is somehow already possible?

  12. Ever since I started using Unraid (2010, v4.7), the community is what has been the leader in adding enhancements/value/features to Unraid. Initially Unraid didn't do much more than being a NAS, which was exactly what it was designed to do. But the community was the one who started to build plugins, and guides on how to use them, to add way more functionality than the core function of just a NAS. That's not to say limetech was lacking really. They did and do build a rock solid NAS, and that was the motto for the longest time. But a lot of users, and definitely myself, wanted to keep utilizing the hardware more and more, way beyond just a NAS. The community really thrived at that. I've been able to add much more functionality to my system, and learn and use more and more programs, thanks to Unraid/limetech and the community. 

     

    I've really enjoyed the community here; it has felt small and personable, and while it's growing it still has that small, tight-knit feeling. A place where people give great support and never make anyone feel like they have asked a stupid question. Unraid is where I've learned a lot of things about Linux, Docker, etc. Again, thanks to the community.

     

    I'm sure I'm just stating the obvious here at this point. Not to say Unraid is worthless without the community, but the community is what helps it thrive, or maybe is even the cause of it. 

     

    On a somewhat selfish note, I have very much benefited from the work of all the community devs, especially @CHBMB and @bass_rock with their work on the nvidia plugin. For the people that use it, it was a huge breath of fresh air for older systems running Plex, and it is definitely not exclusive to older systems. I also used the DVB drivers that were built by them. Unraid/limetech really didn't seem interested at the time in providing support for the nvidia drivers, and @CHBMB and the @linuxserver.io team saw how much that "niche" (I honestly don't think it's that niche) group of people really wanted it and would really benefit from it. I was ecstatic when it was finally released. I'm not sure where the notion came from that the "third party" kernel was unstable, as @CHBMB and the @linuxserver.io team work really hard at releasing good dockers/plugins etc. with very good support and documentation. It's such a no-brainer when going into CA Apps and seeing something by @linuxserver.io; it's always what I'll install given the choice.

    I had mentioned to @limetech in the request thread for GPU drivers in the kernel that having the ability for hardware acceleration in Plex via nvidia cards would attract users. I'm sure it did, and likely attracted users for other reasons too, once @CHBMB and @bass_rock got it working. Limetech didn't ever comment in that thread and, from what I remember, was not at all interested at the time in pursuing it, at least from what I could tell from public information. Yet @CHBMB and @bass_rock worked hard to make it happen, and as I mentioned, I'm sure that created revenue for limetech and I know it made many people happy.

    I'm glad limetech has decided to build in the feature and support it at the company level. And really, this should be a big win for both @limetech and @CHBMB. For limetech because it's a great added feature, and for CHBMB because it gives him a break from having to support that plugin, which was clearly a lot of work. Maybe it even frees up his time to work on other great things for the community. While I'm glad he is finally getting a break, I do hope he and any other developers that may have decided to leave come back, as I do really appreciate their work.

    I also hope that this situation actually ends up being more positive than negative. It looks like limetech has learned, even more than they surely already knew, how valuable their community developers are. And hopefully more communication will help build a stronger, better relationship with the community and its developers, because the combination of the limetech team and the community developers has created an ever-evolving, fantastic product.

     

    Thank you to the limetech team, the community and all the community developers.

  13. Except that it doesn't delete any backups until a successful backup is completed.  But yeah, retain x number of backups will partially alleviate this if you catch it prior to the threshold being reached

    True, but the successful backup that it completed is unfortunately not a backup of a full working system, and in this case basically blank. I was lucky I did a manual copy at the start so it wasn't a complete loss. I usually also keep (by renaming the folder) a permanent copy of a backup that CA Backup/Restore creates, once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary.

     

    But yes, with backups deleted every 60 days and, say, a minimum of 2 backups kept, it would definitely keep a good backup long enough.

     

    In the same vein, keeping every nth backup indefinitely would be nice too, but I can understand not wanting to put in too many options.

     

    Thanks squid for the great app!

     

  14. Short version: If the server has been off for longer than the number of days set in "Delete backups if they are this many days old" and CA Appdata Backup / Restore runs a backup on a schedule, it will delete all the backups.  
     

    Solution: Probably a good idea to have a setting for minimum number of backups.

     

    How I came across this issue:

    My cache got corrupted and my dockers stopped working. Since I didn't have time to fix it and didn't really need my server running when the containers weren't working, I shut it off till I had time to work on it. While working on it I've left it on overnight. Came back to it today and my backups have been deleted. My settings are set to delete after 60 days and, well, my server was off for over 60 days.
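    Something along these lines is what I mean by a minimum number of backups; just a rough illustration of the idea, not the plugin's actual code, and the backup path is made up:

    # age out old backups but always keep the newest MIN_KEEP, so a server that
    # sat powered off for months never ends up with zero backups
    DAYS=60
    MIN_KEEP=2
    BACKUP_DIR="/mnt/user/backups/appdata"   # example path only
    ls -1t "$BACKUP_DIR" | tail -n +$((MIN_KEEP + 1)) | while read -r name; do
        find "$BACKUP_DIR/$name" -maxdepth 0 -mtime +"$DAYS" -exec rm -rf {} \;
    done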

  15. ^ That's all about right. But it basically comes down to cost/performance. For the most part, using up more slots costs less and has the best performance. For example:

    3 x IBM M1015 (LSI 2008 chipset). Max available speed per disk is 320 MB/s. Cost is about $90 ($30 each), but uses 3 slots. 

    2 x IBM M1015 (LSI 2008 chipset) + 2 Intel RAID SAS Expander RES2SV240. Max available speed per disk is 205 MB/s. Cost is about $300: $60 for 2 M1015 and $240 for 2 RES2SV240. Uses 2 slots; the RES2SV240 can be powered without using a slot.

    2 x LSI 9207-8i + 2 Intel RAID SAS Expander RES2SV240. Max available speed per disk is 275 MB/s. Cost is about $340: $100 for 2 LSI 9207-8i and $240 for 2 RES2SV240. Uses 2 slots; the RES2SV240 can be powered without using a slot.

    1 x LSI 9300-16i + 2 RES2SV240 (not sure if this is possible). Max available speed per disk is 275 MB/s. Cost $540? ($300 for LSI 9300-16i and $240 for 2 RES2SV240.) Uses 1 slot.

     

    Or if you have a case with a SAS expander built in, that can save a lot of headache and trouble. Quickest and nicest wiring job also. For example, if you get a 24-bay case with a built-in expander, in single link with a SAS2 expander connected to a single IBM M1015 (LSI 2008 chipset), you have 125 MB/s max per drive (when all drives are being used simultaneously).

     

    You could also use 1 x IBM M1015 (LSI 2008 chipset) and the HP 6Gb (3Gb SATA) SAS Expander. Max available speed per disk is 95 MB/s. Cost is about $60 total.

     

    As far as lanes go, this is the best post for explaining speeds on the PCIe bus. The IBM M1015 (LSI 2008 chipset) uses PCIe gen2 x8, which is equal to PCIe gen4 x2. So on X570 you'll only need 6 lanes to max out the cards'/drives' speed with 24 disks. And realistically, you only need x1 on PCIe gen4 and you'll have 185 MB/s available per disk (if using 24 disks), which is only 3 lanes. So unless I'm missing how lanes are determined, lanes aren't really going to be an issue at all. It's going to be more likely a PCIe slot issue, which in some way is related to lanes, but not really.
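    The rough raw-link math behind that gen2 x8 = gen4 x2 equivalence, using approximate usable per-lane rates (gen2 ~500 MB/s, gen4 ~1969 MB/s) and ignoring controller overhead:

    echo "PCIe gen2 x8: $(( 8 * 500 )) MB/s"    # ~4000 MB/s, the M1015's link
    echo "PCIe gen4 x2: $(( 2 * 1969 )) MB/s"   # ~3938 MB/s, effectively the same bandwidth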

     

    The issue you kind of run into is that consumer boards tend to give you x16 slots and x1 slots. Sadly the x1 slots aren't really useful in a physical sense, especially when it comes to HBA cards, as they are for the most part x8 cards. In the server board world, x8 is a much more common slot. You can easily use an x8 card in an x16 slot, but you'll run out of them fast, and you might need them. I think you could use adapters from x1 to x8 without a performance hit, but I'm not 100% sure. It really just depends on what the motherboard manufacturer decides. It would be easier if they just used x16 for every slot, but I doubt that will happen.

     

    It makes way more sense to get the right board for what you need but this was interesting: https://linustechtips.com/main/topic/1040947-pcie-bifurcation-4x4x4x4-from-an-x16-slot/

  16. Hard to pick one thing for both, so two things that tie for number one.
    1) Ease of use (for the most part), much easier than it was years ago. 2) Unraid's core feature, redundancy with any drive added at any time, of any size/manufacturer, etc.

    Again a tie for what I would like to see in 2020. 1) Easier/built-in server-to-server backup and backup to cloud storage. 2) Multiple cache pools (which I know is a planned feature).

  17. Just read about the work that went into releasing the latest version. Thank you to all that are involved in keeping this amazing addition to Unraid working. I've been enjoying the benefits of it ever since it was released, and we all know how great it is to have this ability. I really appreciate it.

    I searched back a bit and didn't see anything recently about the work being done on combining the Nvidia build and the DVB build. That would be the next dream come true. Is there any place I can follow the development of that build? Seems like it's getting lost in this thread.

  18. 30 minutes ago, johnnie.black said:

    It is, first parity disk will be overwritten with old parity, then disk15 will be overwritten by the rebuild.

    Thanks Johnnie. That makes sense, but I thought I had not seen it shown like that (both disks as "new"/blue) the first time I did the parity swap/copy. From what I've gathered, the copy operation happens, then the array stops, and then you have to bring it online to rebuild.

     

    So during the copy, the old parity (which in this case is now in slot 15) is written to the new parity in the parity 2 slot. Disk 15 during this operation isn't getting written to. Now after the copy operation happens, the array needs to be started, and when the array is started, that's when the data on disk 15 is overwritten. I know these are minor details, but having the correct information in the GUI would help with understanding what is really happening.

     

    Started the parity copy now, and I won't be doing any changes until I start the array this time. Thanks to both of you.

  19. 7 hours ago, johnnie.black said:

    Yes, any interruption of the parity swap procedure requires starting over, same if you reboot/powerdown after parity copy.

    Ah ok, that's exactly where I went wrong. Thanks. Maybe adding a warning to the wiki or GUI might not be a bad idea.

    3 hours ago, trurl said:

    Just to clarify, and possibly make a point that might have been missed.

     

    People sometimes use words differently. I try to say "port" when I mean how the disk is attached, and "bay" for the physical location of the disk within the case, and "slot" for the actual Unraid disk assignment, since that is how Unraid refers to that in the logs.

     

    When you use the word "slot", do you mean the actual Unraid disk assignment? Rearranging those invalidates parity2.

     

    You make a good point, and I was using the wrong terminology, which I'm usually a stickler about, so thank you for putting me straight. I edited my last post to reflect the proper wording.

     

     

    Since I know I'm starting over, I'm going to perform the parity swap/copy again. The server is currently in the above state waiting to start the process. But I want to confirm that seeing "All existing data on this device will be OVERWRITTEN when array is Started" on both disks is the expected behaviour. It really feels like it shouldn't be what is expected.

  20. That's it, nothing else happened. But I haven't talked about rebuilding parity. 

     

    Parity 2 was in bay 2

    Disk 15 was in bay 19 (The replacement for the dead disk 15)

     

    Parity swap/copy was performed. All I had to do was hit start and Unraid would build the data onto disk 15.

     

    But before I hit start I moved the disks

     

    Parity 2 (which after parity swap became disk 15) was moved to bay 15

    Disk 15 (which after parity swap became Parity 2) was moved to bay 2

     

    Unraid then marked both disk as new (blue square) and said "All existing data on this device will be OVERWRITTEN when array is Started" on both disks (parity 2 and disk 15)

     

    The only way to correct the above issue is to change the assignments back to before the parity swap happened. If I assign disk 15 back to the parity 2 slot and unassign disk 15 from slot 15, then Unraid reports "Configuration valid". 

     

    So I think the obvious process would be to basically start over. Start the array without disk 15 assigned to a slot. Stop the array, add the new disk and perform parity swap/copy. Then start the array as I should have done. Without moving anything.

     

    What I think happened is Unraid saw the disks disappear when the drives were pulled out. For some reason it seems to have forgotten the parity swap operation happened. I suspect if I had powered down the server and then done the move, Unraid would not have cared. But that's only a theory.  

     

    Not sure if something weird just happened or this is a "bug" or a note should be added to the procedure. It would need to be replicated of course but it would be simple. Do a parity swap then unplug and replug while powered on and see if unraid forgets that the parity swap happened. So the note would just be to not make any changes until the array is started at least once.

     

     

    Well, I decided to try and start over. Started the array with a valid config, which like I said was going back to before the parity swap: disk 15 back to the parity 2 slot and slot 15 unassigned. With the array up everything was fine, disk 15 was emulated and I have valid parity on both parity drives. Now I was going to start the parity swap again but I don't know if this looks right. I didn't think disk 15 had the "data will be overwritten" warning (disk 15 being the "old" parity 2).

     

    Please confirm this is expected on a parity swap and I'll proceed. But it looks wrong, like I would have data loss (loss of the data on disk 15).

    [screenshot of the array device assignments]

     

    *Note, this is what I saw after moving the disks around and why I was worried to start the array. As the data on disk 15 (old parity 2) needs to be copied onto parity 2. If the data on disk 15 is overwritten then the parity info is lost. So this looks wrong.

     

    The ironic thing here to me is that both disk 15 and parity 2 are currently identical, as the parity copy operation finished. Assigning either to parity 2 should be fine, but Unraid won't allow the new drive to be assigned.

  21. 5 hours ago, trurl said:

    Do you have Notifications setup to alert you immediately by email or other agent as soon as a problem is detected?

    Yup, that's how I was aware of the drive failure. ***

     

    So I decided to do something I guess kind of dumb and now I'm not sure how to proceed again. I think I know but would like to be sure.

     

    I did the parity swap, and Unraid copied the data from my old parity to my new parity with the array offline. That operation completed and then I was presented with a stopped array. I had valid parity and starting the array would start the rebuild of disk 15 (overwriting my old parity). That's all well and good. But I moved the 2 drives around the server physically, with the server on and array stopped (as I have hot swap bays). That pissed off Unraid... In hindsight I think if I had done it with the server off it would have been fine, and moving them physically shouldn't matter to Unraid, correct?

    So if I assign my old parity back to the parity slot and leave disk 15 unassigned, it's happy. It's back to before I started the parity swap and copy procedure. I then start the array, then stop the array, then continue with the parity swap/copy procedure again. I would suspect this will work 100% as it will just copy the parity again (unnecessarily really). After the copy is complete, start the array and let it rebuild disk 15.

     

    But I think there is a way to assign the new parity drive and tell it to trust it. Then build disk 15. 

     

    Essentially I know I have 2 parity disks that are the same due to the parity copy procedure. My new parity and old parity drives should be identical. But Unraid says "All existing data on this device will be OVERWRITTEN when array is Started" on my new parity, even though it should be valid since it's a clone of the old parity. I'm about 99% sure I can't use new config as I'll lose parity and the data on my failed disk. I was thinking more of a trust parity option, but I don't see one.

     

     

    Option one seems a lot safer but a bit longer so I'll likely end up there but would like to know if option 2 really exists.

     

     

     

    ***

    Side note: I was just thinking it would be nice to have a history of "acknowledged" drive stats. You get a notification of, say, 5 reallocated sectors. And while that's not great, as long as it stays at a steady state it's not a big concern. But then some time later you get a notification for 6. You acknowledge it and move on. What I'm getting at is that it's hard to keep track of whether the drive health is slowly diminishing if you don't recall when you acknowledged the errors. Sure, there are other solutions, but a history of the error and when you said "yeah, I saw it" would be nice to have in the drive information page. 

     

  22. On 9/4/2019 at 7:46 AM, trurl said:

    That isn't true. Diagnostics usually contain SMART for a disabled disk as long as the disk can be communicated with.

    Well, I was referring to the GUI; going into the properties of the drive shows nothing, it says the disk needs to be spun up but clicking spin up does nothing. 
     

    Anyhow, I moved the drive just to see, and after rebooting I could get the attributes. This disk had:

    Current pending sector 618

    Offline uncorrectable 617

     

    Not sure if I knew it was high before and acknowledged it and went on my way wondering how much longer it would last, or if it shot up quickly. But I'm pretty sure it was the former. Not surprised knowing the drive's history. 
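    (Side note: those attributes can also be read from the command line if the GUI won't show them; the device name here is just an example.)

    smartctl -A /dev/sdX | grep -E 'Current_Pending_Sector|Offline_Uncorrectable'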

     

    Parity swap running now; so far initializing it was easy, now it's just time to wait. Thanks guys, and thanks limetech/bonienl for making this an easy task in the GUI.