AcerbicMaelin

Everything posted by AcerbicMaelin

  1. Is it safe to close the browser tab when running Docker updates? I've been leaving it open and waiting because I'm nervous about breaking everything by killing it mid-update, but wondering if this concern is unnecessary.
  2. Aha! That seemed to work, so I stopped the array and did another new config, and it's recognised and mounted the drive, files are visible, and now it's doing a parity sync. Fingers crossed!
  3. Okay, I tried that (using "/dev/md1", since it's disk1) and got the following error message: Is it safe to destroy the log? Is there a way to see what's in the log first?
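A sketch of the xfs_repair sequence being discussed (flag names are from the xfs_repair man page; the device path is the /dev/md1 from the post above — do not run blindly). The no-modify mode is the closest thing to "seeing what's in the log" first, in the sense of previewing what a repair would change:

```shell
# Dry run: -n (no modify) reports problems without touching the filesystem
xfs_repair -n /dev/md1

# If xfs_repair refuses to run because of a dirty log, the gentler fix is to
# mount and cleanly unmount the filesystem so the log replays itself. Only if
# that fails, -L zeroes (destroys) the log; this can lose the most recent
# metadata updates, so it's a last resort:
# xfs_repair -L /dev/md1
```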
  4. Diagnostics attached. I guess worst case I can just reformat it as a drive in the array, and copy everything over a second time using the method you suggested, but the copy takes almost two days, so it'd be nice to avoid that... tower-diagnostics-20201027-0219.zip
  5. When I look in the Plugins screen it appears I have version 2020.10.24, and there is an update for 2020.10.25 now available. Unassigned Devices Plus is version 2020.05.22.
  6. Okay! Strange hiccup.
     - I finished copying all the files over to the 8TB drive using rsync, ssh'd into the server. I can browse around that 8TB disk while it is mounted with Unassigned Devices as /mnt/disks/WDC_WD80EFAX-blahblah and see all the files.
     - Unassigned Devices shows that disk as formatted with xfs, and so does "df -T" (listed as /dev/sdb1).
     - Stopped the array, went to Tools > New Config, Preserve Current Assignments: All, Yes I Want To Do This, Apply, Done.
     - Went to the Main tab to configure the array; all disks were showing the 'blue' icon.
     - Selected the dropdown for Disk 1, which was unassigned, and set it to WDC_WD80EFAX-blahblah.
     - Pressed Start to start the array: "Parity disk content will be overwritten", Proceed.
     But now the Array Devices section reports Disk 1 as "Unmountable: No file system", and down the bottom in Array Operation it says "Unmountable disk present: Disk 1 - WDC_WD80EFAX-blahblah" and offers to format the disk. I tried again, but this time when assigning Disk 1 to WDC_WD80EFAX-blahblah I clicked on the disk and manually set its file system to 'xfs' instead of 'auto', but it still gives the same messages. Any ideas?
  7. Ahhhhh that makes sense. Well, useful to know next time I need to do some mucking around with file systems, not that I have any reiserfs drives left now
  8. Thanks for the heads up! I *probably* would have figured out the new config thing myself but good to have a warning. I think the reason I didn't just change the disk1 format was that the disk wasn't cleared and had data on it, and I wasn't confident that just telling the server "this is disk1, it's xfs now, yes format it that way" when it already contained data wouldn't invalidate my parity and cause potential problems. But maybe I'm wrong about that?
  9. Okay, so far so good:
     - The 8TB drive finished rebuilding.
     - I mounted the old 4TB reiserfs drive in the USB dock using Unassigned Devices (/mnt/disks/USB3.0_Generic_USB_Device/).
     - Copied everything from the 8TB to the 4TB using "rsync -avPX /mnt/disk1/ /mnt/disks/USB3.0_blahblah/" (this only took a couple of minutes, since the drives contained mostly the same data).
     - Made a 'new config', removed disk1 from the array, and started the array in maintenance mode so the array drives stayed unmounted; used Destructive Mode in Unassigned Devices to format the 8TB drive as xfs.
     - Mounted the USB 4TB drive and the 8TB drive as Unassigned Devices.
     - In a tmux session, used "rsync -avPX /mnt/disks/USB3.0_blahblah /mnt/disks/WDCblahblah" to copy all the files from the 4TB in the USB dock back to the 8TB (currently unassigned) drive. This is the step currently in progress.
     - When it finishes, I will add the 8TB back into the array as disk1 and rebuild parity (I will very deliberately not say that parity is valid).
     I chose not to do the parity rebuild with the freshly-formatted 8TB in the array before copying the data back to the disk, because that would mean two parity runs: once for the rebuild with the formatted 8TB, and a second time while copying the data from the 4TB. I'm aware of the slight risk of having the array unprotected in the meantime, but nothing on this system is really irreplaceable, and the data will still be on the 4TB if something goes wrong. The array disks are currently unmounted (the array is in maintenance mode) so nothing can interfere with them.
  10. I don't currently know how to use the USB dock as an Unassigned Device (or what the consequences of that are), so I'm not sure. Reading through the mirroring-with-rsync procedure at https://wiki.unraid.net/File_System_Conversion#Mirroring_procedure_to_convert_drives, it seems like if the amount of data on the 8TB drive is small enough to fit on a 4TB (I'm pretty confident it is), then it'll work. So, my tentative current plan, noting that AFAICT nothing should be writing to the array during this process, is:
      - Wait for the reiserfs rebuild to finish on the 8TB drive.
      - Figure out how to attach the 4TB reiserfs drive with the USB dock.
      - Figure out how to copy the data from the 8TB drive to the 4TB drive in the USB dock (rsync?). The 4TB reiserfs contains mostly the same data as the 8TB (except for anything that got written in the brief period when I'd started the array but hadn't yet disabled the dockers), and I vaguely got the impression that rsync has an option to be smart about that.
      - Reformat the 8TB drive to xfs.
      - Figure out how to copy the data from the 4TB drive in the USB dock back to the 8TB drive (rsync?).
      - Remove the 4TB drive from the USB dock.
      If anyone can point out any problems with this, or help flesh out the steps I'm unsure about, I'd be super appreciative.
  11. There aren't any backups of the array, but there's nothing on it that's really irreplaceable, so that doesn't matter. If the 8 TB drive only contains slightly less than 4 TB of data, will the spare 4 TB drive suffice to hold the data temporarily while it reformats?
  12. I realise that now, but when I first started the process it didn't occur to me, and it also didn't occur to me that starting the rebuild onto the 8TB disk would immediately invalidate the 4TB disk it was replacing. So: we've turned off Docker, there will be no writes to the array, and I've just started the rebuild with the 8TB disk, which is still using reiserfs. Once the rebuild is finished, we will have:
      - an 8TB parity disk
      - an 8TB reiserfs data disk, containing slightly less than 4TB of data
      - 2x 4TB xfs data disks, each containing slightly less than 4TB of data
      - a USB SATA dock (probably two, actually, if the old one I've got still works)
      - a spare 4TB disk, and probably a couple of other, smaller spare disks lying around
      - no spare HDD slots in the server (it's an HP ProLiant MicroServer, in case that's relevant)
      Is converting the 8TB reiserfs data disk to xfs viable in this situation?
  13. I'm not confident that nothing was written - we have some dockers that might have been downloading stuff or updating their internal databases or something during the minutes while the rebuild was happening, before I stopped it. Doing the new config with 'parity is already valid' seems like it might potentially corrupt things that could be a nightmare to fix. It's 4:30am now so I'll leave the thing offline and discuss with the housemate tomorrow. Maybe we just turn off any services or other stuff that could write to the drive, finish the rebuild with reiserfs on the 8TB, then use the 4TB (and some other spare hard drives we've got lying around) in the USB dock to copy the files off the 8TB, format it to xfs, then copy the files back on...
  14. The oldest drive was the one on reiserfs, so that's the one I was upgrading. I figured "why not also convert the file system at the same time" but I didn't realise that swapping the drives and starting the rebuild would mean missing my chance, only realised that after I'd already started the rebuild. (edit to clarify: I didn't know reiser was bad for 8TBs until after this debacle had already started :P) The last parity check was yesterday, in preparation for the drive swap. No errors.
  15. The drives are very full (only about 60GB left on each of the three 4TB data drives). I'm not fussed if things take a while, but I'm concerned that my housemate might have things going that will be trying to write to the array, and I don't want to start the rebuild now and wake up tomorrow to find it contains 4.5TB worth of stuff and the problem is now ten times harder. I've left the whole array offline at the moment and will talk to him tomorrow so we can make sure the array isn't being written to while we do it. But as a temporary measure to get everything back to the last working state, is it at least possible to go back to the 4TB drive, now that unRAID thinks Disk 1 is supposed to be 8TB? (At least the good news is that only one of the drives is reiserfs! The other two are xfs.)
  16. Hey gang, I think I've botched my opportunity to upgrade. I have a small server with 4 drives: three 4TB data drives and an 8TB parity. Note that the server only has four drive bays, but I have a USB SATA dock (if that helps). I finished a parity check yesterday, with no errors.
      I was planning to swap out one of the 4TB drives for a new 8TB drive I just bought, so I stopped the array, shut down, pulled the oldest drive, swapped in the new one, reassigned it, and started the array. It started rebuilding, but the oldest drive was on reiserfs, and I decided to take this opportunity to figure out how to convert it to xfs. So I stopped the rebuild.
      Mounting and unmounting was taking a long time (~15-20 mins), so before doing anything else I decided to plug the old 4TB drive back in and get back to the original working state. But when I stopped the array and swapped the 4TB drive back in, it now says it can't use that drive because the replacement is too small - presumably because it now expects that disk to be 8TB (even though it hadn't finished rebuilding that disk).
      I guess I could put the 8TB drive back in and finish the rebuild, and that will at least get everything working, but if I do that, there goes my opportunity to convert to xfs, at least until I upgrade the next 4TB drive to another 8TB. Is the situation salvageable with the file system conversion? Or did I miss my chance and I'll just have to wait until I get the next 8TB?
  17. Regarding this post in the FAQ (I'm getting this error trying to update my Plex docker app, using the linuxserver/plex image): Could we put some more details in about where to find the docker.img file I need to delete, and a quick step by step on how to reinstall it afterward (or at least a link to more detailed instructions)? I am barely managing to remember how to work a unix commandline from back in my uni days, and sometimes it feels like my unRAID server is held together with nothing more than hope. Not sure where to look to find docker.img - I had a quick poke around in /var/lib/docker but couldn't find it.
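For what it's worth, on Unraid the Docker vdisk normally lives under /mnt rather than /var/lib/docker - by default at a path like /mnt/user/system/docker/docker.img, and the exact location is shown under Settings > Docker - so a find over /mnt should turn it up wherever it is. A sketch against a mock directory tree (the real paths depend on the server, so this only demonstrates the search command):

```shell
set -e
# Mock tree standing in for the server's /mnt (illustrative paths only)
root=$(mktemp -d)
mkdir -p "$root/mnt/user/system/docker"
touch "$root/mnt/user/system/docker/docker.img"

# On the server itself you'd search the real /mnt:
#   find /mnt -name docker.img 2>/dev/null
find "$root/mnt" -name docker.img
```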