lazant's Achievements


  1. Well, I just finished the migration to xfs without any issues! I ended up adding all of my unconverted disks to the global exclude list, then removed them one by one after each pass through the wiki instructions, between steps 17 and 18. The files on the excluded disks were unavailable for a short time, but I didn't need to worry about accidentally changing the source disk during the rsync copy in step 8. Thanks to everyone in this community for their help! I'm truly grateful.
  2. Cheers. That cleared it up. It seems then that as I un-exclude the disks after conversion to xfs, the files will become available again in the share. I'm about halfway through a 3TB copy right now so I'll confirm tomorrow afternoon. Edit: Confirmed. Un-excluding the next disk converted to xfs did make the files on the disk available again.
  3. I looked in Finder on my Mac and there are a bunch of folders missing. However, when I tried browsing the 'Disk Shares' from the excluded disks using UnRAID's browser interface, I can see the missing files. Phew! Anything special I need to do? Or just re-enable the disks when I'm done with all the fs conversion?
  4. So when I uncheck them from being excluded will all the files on those disks show up in the share again?
  5. Thanks for clarifying. So I'm totally panicked right now. Just opened Plex and noticed a ton of my files are missing. Is this because I switched a bunch of the disks to excluded? I assumed excluded just prevented new files from being written to those disks. Does it also not allow them to be read? Will they show up again when I 'un-exclude' them?
  6. I've been proceeding with the conversion and decided to just add all the remaining rfs drives to the excluded list. This seems to be working well and I don't need to worry about the disks being written to during the rsync copy to the swap disk. One other thing I am curious about is Step 17. Is there a good and QUICK way to check that everything is fine? Should I just pick a few random files and compare checksums?
  7. I'm in the process of a massive upgrade to my Atlas clone. I've already upgraded from 5.0.5 to 6.7.0 and replaced a couple of failing drives in the nick of time (I've been really lucky). Now I'm in the process of converting all my disks from rfs to xfs. I've done 2 of 10 so far using the mirroring method described here. As with the original Atlas, I have UnRAID running as a guest on an ESXi 5.1.0 host, along with an Ubuntu guest that runs Plex. This Ubuntu guest, a local printer (for scans), and an IP cam are the only users that have access to the UnRAID shares. When I did the first 2 disks I shut down the Ubuntu guest, turned off the printer, and unplugged the cam to make sure nothing was writing to the array during the rsync operation. Obviously this means I can't use Plex during this process. My question is: rather than only adding the swap disk to the excluded disk(s) list under 'global share settings' as instructed in the wiki, could I also add the source disk and be assured that nothing is written to the source disk during the rsync operation? I have no VMs or dockers running on UnRAID (yet).
  8. I’m curious why we don’t add BOTH disks to the excluded list to ensure nothing is written to them during the copy? The wiki says to just add the swap disk (disk 11 in the example).
  9. I'm currently at step 15 of the mirroring method process for converting the file system of my first drive (disk10). I started with 10 disks formatted using RFS, added a disk11 formatted as XFS, and used rsync to copy all data from disk10 to disk11. I've swapped the drive assignments of disk10 and disk11 (swap) and clicked on the disk names to swap the file system formats; however, both say "auto" for file system. I just want to make sure I don't mess this up. Should I set disk10 to xfs and disk11 to rfs? Since the other 9 disks are all set to auto, do I need to go through and set them to rfs as described at the end of step 11? Thanks.
  10. Ok, so I finally finished preclearing new drives and I'm ready to proceed. I have a 6TB parity 1 drive, a 3TB parity 2 drive, and 10x 3TB data drives. Drive 10 is failing and I want to replace it with a 6TB drive, then proceed to converting all the drives to XFS using the "mirroring" method, which requires there to be only one parity drive. I just unassigned the 2nd parity disk, powered down, removed the disk, powered on, and tried to assign a precleared 6TB drive to disk 10, but it said invalid configuration and that parity 2 was missing. Is there something else I need to do to have UnRAID forget about parity 2?
  11. Yea I'll add the 2nd parity after I finish all the mirroring. Thanks for the help.
  12. According to the wiki, the mirroring procedure will break the validity of the 2nd parity disk. So I thought I would just remove it before beginning to avoid any confusion.
  13. Can someone explain how I go about removing my 2nd parity disk before I start the mirroring process for upgrading file systems? Do I just unassign the parity 2 drive? Thanks.
  14. I'm finally getting around to converting file systems on my ancient Atlas build from reiserfs to btrfs or xfs (haven't decided yet). I currently have 12 disks in my array: 10x 3TB WD Greens and 2x 3TB Seagate Barracudas for parity. One of the data drives is having read errors, so I'm going to replace it. I picked up 5x 6TB WD Red Pros the other day when Amazon was having a sale. My plan, eventually, is to replace the parity drives with these, add a cache drive, and keep a couple precleared and ready to go in case of drive failures. Here's the order I'm planning to do things in:
      1. Replace parity drive 1 with a new 6TB Red Pro
      2. Unassign parity drive 2
      3. Replace the data drive having read errors with a 6TB Red
      4. Use the mirroring method to upgrade the array to the new file system
      5. Add the 2nd parity back to the array
      6. Add a cache drive
      Does this order make the most sense? I would also like to eventually convert to running UnRAID bare metal. As it stands, I have it running as one of 4 VMs under ESXi.