Posts posted by AcerbicMaelin

  1. Okay! Strange hiccup.

    1. I finished copying all the files over to the 8TB drive using rsync
    2. SSH'd into the server; with the disk mounted by Unassigned Devices as /mnt/disks/WDC_WD80EFAX-blahblah I can browse around in it and see all the files
    3. Unassigned Devices shows the disk as formatted with xfs, and so does "df -T" (listed as /dev/sdb1)
    4. Stopped the array, went to Tools > New Config, Preserve Current Assignments: All, Yes I Want To Do This, Apply, Done
    5. Went to the Main tab to configure the array, all disks showing the 'blue' icon. Selected the dropdown for Disk 1, which was unassigned, and set it to WDC_WD80EFAX-blahblah
    6. Pressed Start to start the array, "Parity disk content will be overwritten", Proceed
    7. But now the array devices section reports disk 1 as "Unmountable: No file system", and down the bottom in Array Operation it says "Unmountable disk present: Disk 1 - WDC_WD80EFAX-blahblah" and offers to format the disk.

    I tried again, but this time when I was assigning Disk 1 to the WDC_WD80EFAX-blahblah, I clicked on the disk and manually set its file system to 'xfs' instead of 'auto', but it still gives the same messages. Any ideas?
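
    In case it's useful for diagnosing, here are the read-only checks I can run from the console to see what's actually on the disk (just a sketch - the device names are from my box, so substitute your own; "xfs_repair -n" only reports and changes nothing):

      # Show the partition table and where the partition starts
      fdisk -l /dev/sdb
      # Show what filesystem signature is on the partition
      blkid /dev/sdb1
      # Read-only XFS consistency check (-n = no modify)
      xfs_repair -n /dev/sdb1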

  2. 3 hours ago, itimpi said:

    As far as Unraid is concerned, a format is just a normal write operation, so parity is automatically updated as it runs; you would have been fine. At the level at which parity operates, it is not aware of file systems or their types - just of physical sectors on the disks.

    Ahhhhh that makes sense. Well, useful to know next time I need to do some mucking around with file systems, not that I have any reiserfs drives left now :)
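
    To make that concrete for myself: parity 1 is just XOR across the same byte position on every data disk, so it genuinely can't tell a format from any other write. A toy illustration (not Unraid's actual code, and single-parity only):

      # Toy illustration: the parity byte is the XOR of the same byte
      # on every data disk, regardless of what filesystem wrote it
      d1=0xA5; d2=0x3C; d3=0xF0
      parity=$(( d1 ^ d2 ^ d3 ))
      printf 'parity byte: 0x%02X\n' "$parity"
      # If d2 dies, XOR parity with the survivors to rebuild it
      printf 'rebuilt d2:  0x%02X\n' $(( parity ^ d1 ^ d3 ))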

  3. 20 minutes ago, itimpi said:

    Just to make sure: in step 7 you will have to do a New Config again, keeping all current assignments, and then add disk1 back before starting the array and rebuilding parity. If you simply add it back without going through the New Config step, Unraid would promptly start to clear it (writing zeroes) to maintain parity when you start the array, thus zapping the data you had just copied.

     

    An alternative approach that bypasses New Config would have been to carry out the format change at step 4: stop the array; change the disk1 file system to XFS; start the array; format disk1, which would now show as unmountable and available to be formatted (to XFS); and then simply copy the data back to disk1, now in XFS format. The advantage of this approach is that the array would remain in a protected state throughout.

     

    Thanks for the heads up! I *probably* would have figured out the New Config thing myself, but it's good to have a warning.

     

    I think the reason I didn't just change the disk1 format was that the disk wasn't cleared and had data on it, and I wasn't confident that just telling the server "this is disk1, it's xfs now, yes format it that way" when it already contained data wouldn't invalidate my parity and cause potential problems. But maybe I'm wrong about that?

  4. Okay, so far so good:

    1. The 8TB drive finished rebuilding
    2. I mounted the old 4TB reiserfs drive with the USB dock using Unassigned Devices (/mnt/disks/USB3.0_Generic_USB_Device/)
    3. Copied everything from the 8TB to the 4TB using "rsync -avPX /mnt/disk1/ /mnt/disks/USB3.0_blahblah/" (this only took a couple of minutes, since the drives contained mostly the same data)
    4. Made a New Config, removed disk1 from the array, started the array in maintenance mode so the array drives weren't mounted, used Destructive Mode in Unassigned Devices to format the 8TB drive as xfs
    5. Mounted the USB 4TB drive and the 8TB drive as Unassigned Devices
    6. In a tmux session, used "rsync -avPX /mnt/disks/USB3.0_blahblah/ /mnt/disks/WDCblahblah/" to copy all the files from the 4TB in the USB dock back to the 8TB (currently unassigned) drive - note the trailing slash on the source, which matters (see the note at the end of this post). This is the step currently in progress.
    7. When it finishes, I will add the 8TB back into the array as disk1 and rebuild parity (I will very deliberately not say that parity is valid)

    I chose not to do the parity rebuild with the freshly-formatted 8TB in the array before copying the data back, because that would mean writing parity twice - once for the rebuild with the empty, formatted 8TB, and again while copying the data back from the 4TB. I'm aware of the slight risk of having the array unprotected in the meantime, but nothing on this system is really irreplaceable, and the data will still be on the 4TB if something goes wrong. The array disks are currently unmounted (the array is in maintenance mode), so nothing can interfere with them.
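
    The rsync gotcha I double-checked for step 6: a trailing slash on the source means "copy the contents of this directory", while no slash means "copy the directory itself into the destination". A quick sketch with made-up paths:

      # Trailing slash: contents of src land directly in dst/
      rsync -avPX /mnt/disks/src/ /mnt/disks/dst/
      # No trailing slash: creates dst/src/ and copies into that
      rsync -avPX /mnt/disks/src /mnt/disks/dst/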

  5. 7 minutes ago, trurl said:

    Might depend on how much less than 4TB it is and what filesystem the spare 4TB drive uses. I guess if it doesn't all fit you could use those other spares.

     

    Were you considering using the USB dock as an Unassigned Device?

     

    Another possibility would be to forget about rebuilding the ReiserFS disk onto the 8TB, and instead New Config and rebuild parity with the original 4TB ReiserFS disk back in place, then use the 8TB to upsize one of the XFS disks instead.

    I don't currently know how to use the USB dock as an Unassigned Device (or what the consequences of that are), so I'm not sure. Reading through the mirroring-with-rsync procedure at https://wiki.unraid.net/File_System_Conversion#Mirroring_procedure_to_convert_drives, it seems like if the amount of data on the 8TB drive is small enough to fit on a 4TB (I'm pretty confident it is), then it'll work.

     

    So, my tenuous current plan, noting that AFAICT there shouldn't be anything that is writing to the array during this process, is:

    1. Wait for the reiserfs rebuild to finish on the 8TB drive
    2. Figure out how to attach the 4TB reiserfs drive with the USB dock
    3. Figure out how to copy the data from the 8TB drive to the 4TB drive in the USB dock (rsync?). The 4TB reiserfs drive contains mostly the same data as the 8TB (except for anything that got written in the brief period when I had started the array but hadn't yet disabled the dockers), and I vaguely got the impression that rsync can be smart about that (see the sketch after this list)
    4. Reformat the 8TB drive to xfs
    5. Figure out how to copy the data from the 4TB drive in the USB dock back to the 8TB drive (rsync?)
    6. Remove the 4TB drive from the USB dock

    If anyone can point out any problems with this, or help flesh out the steps I'm unsure about, I'd be super appreciative. :)
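
    On the rsync question in step 3: as I understand it, with -a rsync already skips any file whose size and modification time match on the destination, so a second pass over mostly-identical data only transfers the differences. A sketch of how I'd preview it first (paths are placeholders):

      # --dry-run (-n) lists what would be transferred without copying
      rsync -avPX --dry-run /mnt/disk1/ /mnt/disks/USB3.0_blahblah/
      # Then the real run: -a preserves perms/times/links, -X keeps
      # extended attributes, -P shows progress and keeps partial files
      rsync -avPX /mnt/disk1/ /mnt/disks/USB3.0_blahblah/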

  6. 16 minutes ago, trurl said:

    Do you have good (enough) backups? You will obviously have to put the data from the 8TB somewhere outside the parity array so you can reformat it.

    There aren't any backups of the array, but there's nothing on it that's really irreplaceable, so that doesn't matter. If the 8TB drive only contains slightly less than 4TB of data, will the spare 4TB drive suffice to hold the data temporarily while I reformat the 8TB?
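
    For what it's worth, this is how I'd check whether it fits before committing to anything (a sketch; disk1 is the 8TB in my case):

      # The 'Used' column for /mnt/disk1 needs to come in under 4TB
      df -h /mnt/disk1
      # Or total it up directly
      du -sh /mnt/disk1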

  7. 7 hours ago, trurl said:

    Maybe I missed something in this discussion, but I got the impression that you might think you can convert the filesystem while replacing a disk. You must format to change filesystem, and rebuilding a formatted disk results in a formatted disk.

    I realise that now, but when I first started the process it didn't occur to me, and it also didn't occur to me that starting the rebuild on the 8TB disk would immediately invalidate the 4TB disk that it was replacing.

     

    So: we've turned off Docker, there will be no writes to the array, and I've just started the rebuild onto the 8TB disk, which will come out as reiserfs. Once the rebuild is finished, we will have:

    • an 8TB parity disk
    • an 8TB reiserfs data disk, containing slightly less than 4TB of data
    • 2x 4TB xfs data disks, each containing slightly less than 4TB of data
    • a USB SATA dock (probably two, actually, if the old one I've got still works)
    • a spare 4TB disk, and probably a couple of other, smaller spare disks lying around
    • no spare HDD slots in the server (it's an HP ProLiant MicroServer, in case that's relevant)

    Is converting the 8TB reiserfs data disk to xfs viable in this situation?

  8. 6 minutes ago, jonathanm said:

    If NOTHING was written to the array after the rebuild started, then parity should still be valid except for a few sectors.

     

    Doing a New Config, being extra careful to make sure all the drives are in the correct slot assignments, and selecting 'parity is already valid', should get you back.

     

    You WILL need to do a correcting parity check to get back in sync, but things should be relatively close so the errors should be few.

    I'm not confident that nothing was written - we have some dockers that might have been downloading stuff or updating their internal databases during the few minutes the rebuild was running, before I stopped it. Doing the New Config with 'parity is already valid' seems like it might corrupt things in ways that could be a nightmare to fix.

     

    It's 4:30am now so I'll leave the thing offline and discuss with the housemate tomorrow. Maybe we just turn off any services or other stuff that could write to the drive, finish the rebuild with reiserfs on the 8TB, then use the 4TB (and some other spare hard drives we've got lying around) in the USB dock to copy the files off the 8TB, format it to xfs, then copy the files back on...

  9. 2 minutes ago, jonathanm said:

    Let me guess, you "chose" to upgrade the ReiserFS drive?🤣 I know, hindsight and all that.

     

    When was your last parity check with zero errors?

    The oldest drive was the one on reiserfs, so that's the one I was upgrading. I figured "why not also convert the file system at the same time", but I didn't realise that swapping the drives and starting the rebuild would mean missing my chance; I only realised that after I'd already started the rebuild. (Edit to clarify: I didn't know reiser was bad for 8TBs until after this debacle had already started :P)

     

    The last parity check was yesterday, in preparation for the drive swap. No errors.

    The drives are very full (only about 60GB left on each of the three 4TB data drives). I'm not fussed if things take a while, but I'm concerned that my housemate might have things running that will try to write to the array, and I don't want to start the rebuild now and wake up tomorrow to find it contains 4.5GB of new stuff and the problem is ten times harder.

     

    I've left the whole array offline for the moment and will talk to him tomorrow so we can make sure the array isn't being written to while we do it. But as a temporary measure to get everything back to the last working state, is it at least possible to go back to the 4TB drive, now that Unraid thinks Disk 1 is supposed to be 8TB?

     

    (At least the good news is that only one of the drives is reiserfs! The other two are xfs)

  11. Hey gang, I think I've botched my opportunity to upgrade. I have a small server with 4 drives - three 4TBs and an 8TB parity. Note that the server only has four drive bays, but I have a USB sata dock (if that helps). I finished a parity check yesterday, with no errors.

     

    I was planning to swap out one of the 4TB drives for a new 8TB drive I just bought, so I stopped the array, shut down, pulled the oldest drive, swapped in the new one, reassigned it, and started the array. It started rebuilding, but the oldest drive was on reiserfs, and I decided to take the opportunity to figure out how to convert it to xfs, so I stopped the rebuild. Mounting and unmounting was taking a long time (~15-20 mins), so before doing anything else I decided to plug the old 4TB drive back in and get back to the original working state. But when I stopped and swapped the 4TB drive back in, it now says it can't use it because the replacement drive is too small - presumably because it now expects that disk to be 8TB (even though the rebuild onto it never finished).

     

    I guess I could put the 8TB drive back in and finish the rebuild and that will at least get everything working, but if I do that, there goes my opportunity to convert to xfs, at least until I upgrade the next 4TB drive to another 8TB. Is the situation salvageable with the file system conversion? Or did I miss my chance and I'll just have to wait until I get the next 8TB?

  12. Regarding this post in the FAQ (I'm getting this error trying to update my Plex docker app, using the linuxserver/plex image):

     

    What do I do when I see 'layers from manifest don't match image configuration' during a docker app installation?

     

    I have a theory as to why this is actually happening; unfortunately I am unable to replicate the issue, so I cannot test the theory. EDIT: I know what's happening. The details however aren't important (it's caused by the Docker API itself, not unRaid)

     

    As to the solution, you will need to delete your docker.img file and recreate it. You can then reinstall your docker apps through Community Applications' Previous Apps section or via the my* templates. Your apps will be back the exact same way as before, with no adjustment of the volume mappings, ports, etc. required.

     

    This is a one-time operation.

     

    Could we put some more details in about where to find the docker.img file I need to delete, and a quick step-by-step on how to reinstall everything afterward (or at least a link to more detailed instructions)? I am barely managing to remember how to work a Unix command line from back in my uni days, and sometimes it feels like my unRAID server is held together with nothing more than hope. I'm not sure where to look to find docker.img - I had a quick poke around in /var/lib/docker but couldn't find it.
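
    (Partially answering my own question, assuming the stock layout: docker.img is a loopback image that Unraid mounts at /var/lib/docker, which would explain why I couldn't find it inside that directory - the file itself lives on the array/cache, and the actual path is shown under Settings > Docker. A sketch of how I'd locate it:)

      # docker.img is loop-mounted AT /var/lib/docker, so it won't be
      # found inside it. Default location (check Settings > Docker):
      ls -lh /mnt/user/system/docker/docker.img
      # If it's been moved, this should track it down:
      find /mnt -maxdepth 4 -name docker.img 2>/dev/null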
