wildwolf


Posts posted by wildwolf

  1. Currently running SickChill on a Synology, but it presents a message that it's no longer supported. I have an unRAID server, and found this thread. Is this a similar SickChill? The same SickChill?
    Running it in a Docker container on unRAID seems desirable to me; I think I could then get rid of my Synology. Is there a good how-to/video for installing SickChill on unRAID Docker from scratch? I'm assuming I can't easily port anything over and would just need to start from scratch.

  2. Hi, maybe I just have higher expectations than reality, so I need to ask how this should work, or whether I potentially have something set up wrong. I have a pretty decent system, albeit a few years old (64GB RAM, i7-6500K). I have a 2-port 10G card connected to a UniFi 48-port Enterprise PoE switch, managed by a UniFi Dream Machine Pro. One line is wired to the 10G switch for my LAN network (192.168.1.1/24), and the 2nd line is wired to the switch for my WiFi network (192.168.30.1/24).

    If I transfer a large amount of data (currently 1TB) via the 192.168.1.1/24 network, it does the typical SMB thing: runs at about 38-50 MB/sec (the unRAID array is encrypted, so I assume that slows things down), then seemingly runs out of buffer, slows down, and catches back up, over and over, every few minutes. The Windows file transfer dialog estimates about 4 hours to finish the transfer.

     

    In the meantime, my daughter is trying to watch some kids' shows on Plex. Almost immediately it buffers like crazy and can't keep up - in short, it is unplayable. I was hoping that by upgrading both lines to the switch to 10G, I would be able to transfer files at max speed on one (wired) line while Plex could still feed the WiFi clients at a decent speed on the 2nd 10G line. Is it because they're still on the same switch - even though they're on different lines and different VLANs - that the transfers over that switch are still the limiting factor?

    Or is the limiting factor the CPU and the work of encrypting 1TB of data to store on a share, rather than network throughput?
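    For my own sanity, here's a quick back-of-the-envelope sketch (Python; the speeds are just the ones quoted above, and the 10GbE line rate is only a theoretical ceiling) showing that the 4-hour estimate corresponds to roughly 70 MB/s - nowhere near what either 10G line can carry:

```python
# Rough transfer-time check for the 1TB copy described above.
# Speeds are the observed 38-50 MB/s, the ~70 MB/s implied by the
# 4-hour estimate, and line-rate 10GbE as a theoretical ceiling.
DATA_MB = 1_000_000  # 1 TB expressed in MB (decimal)

for label, speed_mb_s in [
    ("observed low", 38),
    ("observed high", 50),
    ("implied by 4h estimate", 70),
    ("10GbE line rate (theoretical)", 1250),
]:
    hours = DATA_MB / speed_mb_s / 3600
    print(f"{label:32s} {speed_mb_s:5d} MB/s -> {hours:5.1f} h")
```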

     

  3. Okay, hopefully the last question, just to make sure I have the reorder work correct.
    Here is the current layout (see pic). I am trying to remove the gap in the numbering in the GUI now.

    From my understanding, these are my steps:

    1. Stop the array.
    2. Go to Tools --> New Config.
    3. Click Preserve current configuration, select All, then click Close.
    4. Go to the Main page.
    5. Unassign 3AYC from disk 8 and assign it to disk 5.
    6. Unassign 5PNC from disk 9 and assign it to disk 6.
    7. Unassign Parity 2.
    8. Check the "Parity is valid" box.
    9. Start the array.
    10. After it starts up, stop the array again and reassign the parity 2 drive to the parity 2 slot.
    11. Start the array - and the parity 2 rebuild will commence.

    Is that accurate?
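    To keep myself honest on steps 5-6, here's a tiny sketch of the intended remapping (the serial fragments are the ones from my layout; the structure is just illustrative):

```python
# Sanity check of the intended slot remapping before doing New Config.
# Keys are array slot numbers; values are serial fragments (None = empty slot).
old_assignments = {5: None, 6: None, 7: None, 8: "3AYC", 9: "5PNC"}
new_assignments = {5: "3AYC", 6: "5PNC"}

# Every drive that held data must reappear somewhere in the new layout.
moved = {serial for serial in old_assignments.values() if serial}
assert moved == set(new_assignments.values()), "a data drive would be dropped!"
print("remap:", new_assignments)
```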


     

    drive-layout.png

  4. Just now, trurl said:

    Label each drive with the last 4 characters of the serial number. It is the serial numbers you need to depend on when trying to decide which disk is which, because that is how Unraid is going to decide which disk is assigned to which slot. Many drives already have serial number on a small label on the end of the drive opposite the connections.

    They're already labeled by the manufacturer (WD drives are awesome for this!). Just wondering if there's any protection or performance benefit to separating the parity drives, one per cable; if not, I'll just slap them into slots to match what the new config will be - labeled top to bottom: 2 parity, then 6 data.
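    In case it helps anyone doing the same matching, here's a minimal sketch (assuming lsblk is available, as it is on Unraid and most Linux systems) that prints the last 4 characters of each disk's serial so labels can be matched to assignments:

```python
# Print each disk's device name, size, and the last 4 characters of its
# serial number, for matching physical labels to Unraid assignments.
import subprocess

out = subprocess.run(
    ["lsblk", "-d", "-n", "-o", "NAME,SIZE,SERIAL"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    parts = line.split()
    if len(parts) == 3:               # skip devices that report no serial
        name, size, serial = parts
        print(f"{name:8s} {size:>8s}  ...{serial[-4:]}")
```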

  5. Thanks for all the help, guys. Card removed, down to 1 card and 2 breakout cables, and it's all looking a little better to me.
    Now, an off-the-wall question. My card (9211) has 2 connected cables with 2 drives connected to each. Is there any benefit to having 1 parity drive on each connected breakout cable, either for performance or for parity/data safety? Or is there no real-world difference - just slap the drives wherever it's most convenient/easiest to maintain/know where your drives are?

    When I originally built, I put 1 parity per cable, thinking it might be safer. However, I now realize that may not be true and wanted to ask if it matters.
    The system has 8 drives mounted vertically (Fractal Define R5), so it might be easier to know which drive is which if I put Parity 1 at the top, Parity 2 below that, Data 1 third, Data 2 fourth, and on down the line. But is there any real detriment to doing that? Or would it be better to still separate the parity drives across the card's cable bundles?

  6. 3 hours ago, JonathanM said:

    To change disk numbers in the Unraid array layout, you must use the new config function and build parity2 fresh.

     

    To physically move drives between controllers requires no changes in the Unraid software, and the drives will stay in the same disk slots in the Unraid array layout. They will not change places in Unraid.

     

    Physical location and disk number assignment are two totally separate concepts in Unraid, they do not interact. At all.

     

    I think I finally understand. I can remove my other card and rearrange the physical disks as I want, but my setup will still remain:

    P1
    P2
    D1
    D2
    D3
    D4
    D8
    D9

    until I do a New Config, which will then require rebuilding parity 2 to 'close the gap' in the GUI to:

    P1
    P2
    D1
    D2
    D3
    D4
    D5
    D6

    Is that correct?

  7. 12 hours ago, JonathanM said:

    Physically rearranging disks doesn't require any configuration changes or rebuilding of parity. Unraid tracks disks by serial number, it doesn't care which cable they connect to.

     

    Rearranging logical slots in the management GUI is what you are referencing in the reorder disks thread.

     

    Totally different and unrelated concepts in Unraid.

     

    This is where I'm confused, then. I want to accomplish both. I want to physically rearrange the 8 disks that remain, swapping cables and such around so they are all on 1 controller card instead of 2. Plus, I also hope to "close the gap" in the GUI layout.

    Or will physically rearranging things (eliminating the 3 empty slots by removing all the extra slots from that 2nd card when it's pulled and consolidating down to 1 card) automatically cause the drives to show up in all the right places, as long as I have parity & parity 2 identified as the correct 2 drives? Are you/others saying that I (or anybody) could, in my view below, move disk 9 to the cable connector that is currently disk 5 (a different controller card), and it would still boot up with all the data intact, without having to run a New Config or do anything else?

    8devices.png

  8. 4 hours ago, trurl said:

    If you manually run a parity check there is a checkbox you have to check to make it correct parity errors, so you should know whether or not you checked the box. All that history is showing is that it found a large number of sync errors and doesn't tell you whether or not they were corrected.

     

    Post new diagnostics if you haven't rebooted and we can see from syslog whether or not it was correcting, but as I said

    even though technically only parity2 would need rebuilding to close the gap (assuming parity1 is valid which is in doubt).

    Yes, it was checked, so I assume it was correcting parity errors and not some other type of error.

  9. 39 minutes ago, trurl said:

    Was that a correcting parity check?

     

    Doesn't really matter since it is simpler to just go ahead and rebuild both parity when you New Config to close that gap.

    Yes, I'm pretty sure, but I have no way to know for certain?

    So, as long as I rearrange the SAS cards and put the drives back in the same order (1-2 parity in serial # order, 3-8 in serial # order) in the new slots 1-8, I should be good?

    Added a screenshot of my parity checks - this is the only time I've ever had errors on a parity check, so my assumption is they are 'correcting parity check' errors.

     

    paritycheck.png

  10. 2 hours ago, trurl said:

    You mean because you want to "close the gap" left from removing 5,6,7?

    Correct, and I want to reduce from using 2 SAS cards down to 1 SAS card.

    Also, when I 'cleared' the last 2 5TB drives, the script just stopped working - on both attempts (I ran them separately). I went ahead and removed the drives and changed the configuration each time, because I had already moved the data off each drive using unbalance. I assumed the clear had failed, and that my parity, even though it said valid, wasn't actually valid.

    After I got the 2nd one out, I ran another parity check to be sure. It just finished:
    Last check completed on Fri 19 Nov 2021 11:39:53 AM EST (today)
    Finding 760363348 errors. Duration: 1 day, 12 hours, 51 minutes, 44 seconds. Average speed: 105.5 MB/sec
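    Quick arithmetic on that result (a minimal sketch - just multiplying the reported duration by the average speed), which lines up with one full pass over a 14TB parity drive:

```python
# Cross-check the parity check report: duration x average speed
# should roughly equal the parity drive size (14 TB here).
duration_s = 1 * 86400 + 12 * 3600 + 51 * 60 + 44   # 1 day, 12 h, 51 min, 44 s
avg_speed_mb_s = 105.5

scanned_tb = duration_s * avg_speed_mb_s / 1_000_000
print(f"{duration_s} s at {avg_speed_mb_s} MB/s ~= {scanned_tb:.1f} TB scanned")
# -> roughly 14 TB, i.e. a full pass over the 14TB parity drive
```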

    I assume that's because I was right: I had removed the drives, and parity hadn't finished updating along the way when I tried to clear them. However, I still appear to have the same used array size, all my larger drives are in place, and the parity check just finished, so it should all be good now.

    I just need to consolidate the cards/cables down to 8 lines of a single SAS card now.
     

  11. Thanks again, Jonathan & trurl. This has indeed sped up the process.
    I can't seem to find the "yes I'm sure" checkbox anywhere, but so far everything seems to be working smoothly.

     

    Sorry - that's in the "Replacing a Data Drive" unRAID wiki: https://wiki.unraid.net/Replacing_a_Data_Drive

    I have noticed small discrepancies (very minor - a key word or two) in the wiki text, but I made my way through it.

    I am almost done, and I'm sure someone more experienced might have done all this faster. I do have more questions, though.
    Currently, I'm sitting here:
    P1 14TB
    P2 14TB
    D1 14TB
    D2 14TB
    D3 8TB
    D4 8TB
    D5 (removed)
    D6 (removed)
    D7 (removed)
    D8 8TB
    D9 8TB

    I have 2x SAS9211 cards.
    Card 1 has 2x SFF8087-4SATA cables. I can't look right now, but I believe:
    1st cable has 4 drives (1 of which is parity)
    2nd cable has 2 drives

    Card 2 has 1x SFF8087-4SATA cable.
    1st (only) cable has 1 drive (I think this is the other parity drive).

    I know for a fact that I can trace my cables to drives, look at my serial numbers, and determine which 2 drives are the 2x 14TB (and which is #1 and which is #2).
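    For tracing which port/cable each serial sits on without following cables by hand, here's a minimal sketch (assuming the usual Linux /dev/disk/by-path symlinks are present, as they are on typical udev-based systems like Unraid):

```python
# Map each controller port (from /dev/disk/by-path) to the device it
# resolves to, so serial numbers can be matched to cards/ports.
import os

BY_PATH = "/dev/disk/by-path"

for link in sorted(os.listdir(BY_PATH)):
    if "part" in link:                    # skip partition entries
        continue
    target = os.path.realpath(os.path.join(BY_PATH, link))
    print(f"{link:55s} -> {target}")
```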

    I'd like to remove card 2 from the system and hook up all 8 drives to card #1.
    I think (someone tell me if I'm wrong?) it would be smart to have:
    1st cable - 1st drive parity 1, other 3 drives data
    2nd cable - 1st drive parity 2, other 3 drives data

    Is there a safe/easy way to do this? I understand I'll have to rebuild parity drive #2 from this thread: https://forums.unraid.net/topic/54221-reorder-disks/

    Is it as simple as disconnecting everything, removing my 2nd card, attaching the drives as I've indicated to the 8 slots that are left (in the order I choose), identifying both parity drives and the other 6 as data drives, and starting the array?
     

     


     

  12. 49 minutes ago, JonathanM said:

    Why not just build parity to the 2 new 14TB drives with all your current data drives, then rebuild data1 and data2 with the other 14TB drives, rebuild data3 and data4 with the old 8TB parity drives, and all that would remain would be copying the content of the remaining 3 5TB drives into the free space of the 14TB drives, then set a new config and rebuild parity in the final layout?

     

    Seems like much less work than what you currently have laid out, and all the removed drives would still have copies of the data for backup instead of wiping it out.

    So, the recommended path is what?
    1. Just pull both 8TB parity drives, stick in the 2x 14TB drives, label them as parity, & rebuild parity (do I need to do anything in Tools --> New Config to do so?)
    2. Once parity is rebuilt, replace 2 data drives with 2x 14TB drives and rebuild from parity? Should I do them 1 at a time? What sequence/steps should be followed to ensure no data loss?
    3. Once both are done, repeat using the 2x 8TB (old parity) drives to replace 2 of the 5TB drives.
    4. Move the data from the final 3 5TB drives to the other drives.
    5. Once done, set a new config again and rebuild parity one last time with the remaining 8 drives?

    I guess I'm not familiar with/am unsure about how to just pull a drive, drop in a bigger one, & still maintain all the data. Let me go find some more documentation.
    Thanks for the suggestions.

    I'd still be interested in answers to my original questions 1-4, to see when/where in the process those steps change or get corrected back to the 'normal' process.

  13. Good morning. The past couple of days I followed the shrink-array docs here https://wiki.unraid.net/Shrink_array (the 'clear drive then remove drive' method), using unbalance to move the contents of 1 drive (disk 7 of 11, with 2 parity drives) and scatter it to a few of the other drives. I have a lot of array restructuring* I want to do, and this was a 1st attempt to see if I have the process down. I think I was mostly successful, with some user/novice errors, and I'd like to ask what to do to make the next iterations better.

    * = I have 2 parity drives & 9 data drives, and I would like to reduce to 6 data drives so I can use only 1 SAS card instead of the 2 I'm currently using. I had (in order of Parity --> Data 1-9):

    p1 8TB
    p2 8TB
    d1 5TB
    d2 5TB
    d3 5TB
    d4 5TB
    d5 5TB
    d6 5TB
    d7 5TB
    d8 8TB
    d9 8TB

    I removed disk #7, as it was the least full. My misunderstanding/problems (self-inflicted) started after I completed step 16. I know it was recommended to do a parity check. I started one, then decided instead that I wanted to remove the drive, so I stopped the array, powered down, and removed it. After it started back up, the configuration showed my drive 7 as unassigned - should I have re-ordered the assignments when I did Tools --> New Config? I started to do a parity check then, but noticed it was stating that drive 7 was emulated. I didn't think that was what I needed to see, since I didn't want that drive in the array at all. I also noticed I hadn't yet changed all my shares to use "all drives", as the instructions ask you to remove the drive being removed from all shares in step 1.

    So, in a panic, I selected Tools --> New Config again, reset the drive assignments back to how they were (no #7 after removal), and forgot to check "parity is valid", so now parity is rebuilding for both the p1 & p2 drives. I'm assuming (since all my data still appears intact) that I should be fine; I'll just have disks d1-d6, then d8 & d9, in my array when it's complete.

    I've got a few more drives to do this with, and then I'll upgrade to some larger drives (lots of WAIT time coming up again).
    I guess my questions now are:
    1. When finishing steps 14 & 15, do I reassign the drives to the new slots then?
    2. To get the array using sequential slot #s again, after this parity rebuild is complete, can I just stop the array and rearrange the assignments so there's no empty #7 slot?

    3. When in the process would I modify all the shares again, to account for any drive I removed from the shares?

    4. Step 4 of the instructions has us turn on reconstruct write - when do we change that back? (And I didn't capture what the original setting was, like an idiot, so what do we change it back to?)

     

    I have 4 new 14TB drives I plan to incorporate into this mix. I will eventually be replacing p1 & p2 with 14TB drives, re-using those old parity drives as new data drives replacing 2 of the remaining 4 5TB drives, and repeating the shrink-array process by removing the remaining 3x 5TB drives. The array, when I'm done (hours, and hours, and HOURS of data moves later...), should look like:
    p1 14TB
    p2 14TB
    d1 14TB
    d2 14TB
    d3 8TB
    d4 8TB
    d5 8TB
    d6 8TB
    (eventually shrinking from 9 data drives to 6, but increasing array size from 51TB to 60TB).
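    A one-liner sanity check of that final capacity (minimal sketch, sizes as listed above):

```python
# Final planned layout: parity drives don't add to usable capacity;
# only the 6 data drives count toward array size.
parity = [14, 14]                 # TB, P1 + P2
data   = [14, 14, 8, 8, 8, 8]     # TB, D1-D6

print("usable array size:", sum(data), "TB")                 # -> 60 TB
# Unraid requires parity to be at least as large as the largest data disk.
print("largest data disk <= smallest parity:", max(data) <= min(parity))
```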

    I've got about 11 hours before the parity rebuild is complete. I'm sure I will take the slow road to get there, but as long as I keep the data intact (and hopefully with fewer errors like the one that has me currently rebuilding parity), I'll be fine with the lost time.

     

  14. Hey everyone. I currently have 2x SAS9211-8i cards connecting 11 drives (2x parity, 9x data).

    This takes up both of my x8 PCIe slots and forces my 3rd PCIe slot to run at x1.

    I'm wondering if there's a card that would let me hook up all my drives to 1 card, freeing up the 2nd x8 slot for the device I have in the 3rd slot and allowing it more bandwidth.

    Any recommendations?

  15. What does "Retain current configuration" mean? Aren't we changing the configuration by removing a disk?

    For the 1st method you mention, can I just "pull a disk" and tell it to rebuild? What is the "clear drive" method if it's not what I thought I was describing? Won't the array still be expecting that disk to be re-added at some point?

  16. I've read a few posts on this, most of them older, so for my sanity I wanted to ask outright if I have this logic correct.

     

    I have 11 array disks, 2 of which are parity.

    The 2 parity disks are 8TB each.

    Of the 9 data disks, 2 are 8TB (#8 & #9), the other 7 are 5TB (#1, 2, 3, 4, 5, 6, & 7) each.

     

    Of the 51TB array size, I'm only using 26.3TB.

    2 of my 5TB disks (#7 - 1.15TB and #5 - 2.29TB) I would like to remove from the array and use as unassigned devices, so I can use 1 of them for a CCTV system.

     

    From my understanding, I need to:
    1. Do a parity check. If all checks out, proceed.

    2. Use unbalance to move all the data from #7 disk & #5 disk to the remaining disks. This should leave these disks 'empty' and ready to be removed from the array.

    3. Stop array.

    4. Choose Tools --> New Config. Make sure BOTH parity disks are reassigned back to the same slots and order they previously had.

    5. Assign the remaining disks (#1, 2, 3, 4, 6, 8, & 9) as data disks, which would now be array disks #1, 2, 3, 4, 5, 6, & 7 in the new config.
    6. Rebuild array.

    This would leave my array with about 26TB in a 41TB array, for about 65% capacity. As I grow more, I could just swap out the remaining 5TB disks with 8TB ones.
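    As a sanity check on those numbers (minimal sketch, using the sizes listed above):

```python
# Capacity after dropping the two 5TB disks (#5 and #7) from the array.
remaining_data_tb = [5, 5, 5, 5, 5, 8, 8]   # disks 1-4, 6, 8, 9
used_tb = 26.3

total = sum(remaining_data_tb)              # -> 41 TB
print(f"{used_tb} TB used of {total} TB = {used_tb / total:.0%}")   # ~64%
```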

    If this is correct, does step #6 happen after the New Config, when I "start" the new-config array for the 1st time and it uses just the 2 parity disks to rebuild the array? Are the 2 parity drives ALL I need to rebuild the current 26.3TB of data onto the new 7-disk configuration?

    Usually, I'm impatient and/or don't read enough fine print and I have been known to do things the wrong way causing my own data loss. I'd like to prevent that this time.

    Some usual responses I've read are: why do you want to reduce the # of disks? Because my system is pretty full right now, and this is, I think, the easiest method I can find to free up some slots to create unassigned disks for other uses, as mentioned earlier. If it were easy to drop another controller, more cables, and more disks into the current system, I would just go that route. Alas, it's pretty packed now.

    I also have overkill on the cache drives. I started with 1x 500GB SSD and, for some reason, thought adding a 2nd drive would be very useful. I'm only using 10GB of cache at the moment. So, can the same method be used to reduce the # of cache drives as well?

     

     

     

  17. Hooked up new UPS last week. Forgot to connect USB cable to unRAID box. Power was off for 3 hours while out of town. Came home to unclean shutdown.

    Booted up and disk 1 of array (sdc with xfs) shows unmountable: no file system.

     

    Last parity check was done on 4/6/19. Next one scheduled for 5/6/19. Hope I can recover file system/data. It is a 5TB drive. I can acquire another 5TB drive to use as last resort to rebuild from parity, but last time I tried that (I am impatient), I messed up and lost the data permanently. 

    Would appreciate suggestions this time before I go pressing buttons.

    hitower-diagnostics-20190427-2204.zip

  18. Currently at about 69% for data rebuild of 2 of the 3 drives. Guess I'll figure out what's up with the 3rd drive when it stops. It says, "Unmountable: Unsupported partition layout" right now.


    Wanted to thank everybody for the information that the data was likely still there, which gave me the courage to start the array again. Not sure what happened; the most likely scenarios are:

    1. The controller card, or the SAS cable for that controller card, wasn't fully seated. It's the card that has the 3 drives in question connected.

    2. Maybe the hard power drop/outages caused drive/controller corruption or breakage - it at least took out a motherboard as well, so that's possible.

     

    Maybe another question, for my own learning: is it because I have 8 other drives & 2 parity drives, or just the 2 parity drives, that allows me to rebuild these 2 drives (with a 3rd one also not in the current array)? I thought I would be limited to 2 drive failures at a time, but essentially I had 3, right?

     

    Thanks again!
     

  19. Thanks, all. I had made no changes to my USB thumb drive/configuration.

    I have an encrypted array; on the 1st attempt I entered the wrong password, then the correct one. Still not sure why it took so long to mount the array. Boot time was still pretty fast.

     

    unRAID indicated 3 drives were unencrypted now; all 3 were on the 2nd controller card. Both controller cards are LSI 9211s. I may have put the controller card in a different slot than before - I cannot recall. While troubleshooting why the server would not POST, I had removed all the cards from the system. I know which controller card was in the top PCIe slot (the main one with 8 drives), but I may have transposed the 2nd card with the network card during the mobo swap.

     

    Unraid told me when I mounted the array this time that the 3 drives on the 2nd card were unmountable - volume not encrypted or unsupported partition layout. It's currently doing a parity-sync / data rebuild (and writing at a whopping 150-180 MB/s, so that's great, I think!).

     

    Another question - should I have started the array in maintenance mode (without mounting the file systems) to do this instead?

    After about 5-8 minutes, the rebuild is at 2.1%, so I think that's moving quickly at least.