threesquared


Posts posted by threesquared

  1. Just a quick update in case anyone reads this. I swapped out the motherboard/CPU tray and also removed the cache drive. I re-ran the parity check, which found another 20455 errors, but when I ran it again after that it came back with zero errors. Hopefully the issue is sorted now and was most likely hardware related.

  2. OK, I will see if swapping out some hardware makes any difference. I think the first parity check definitely had more errors, but I couldn't be sure exactly how many more. Is there any way to work out which disk those sectors starting at 4194400 are on?

     

    Edit: So actually the first check returned 392611 errors and the second one returned 59073 errors.
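
    Would grepping the syslog for that sector offset be a sensible way to check? I was thinking of something like the following, though that is just a guess on my part about where the parity check logs the sector numbers:

    grep 4194400 /var/log/syslog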

  3. Thanks for clarifying. I am using an HP N36L microserver with 16GB of ECC RAM at the moment. As luck would have it, I just got an N54L and was planning on moving everything over to that motherboard to upgrade the CPU. I suppose I will give that a go and see if the errors go away.

     

    I assume that with that many errors I am not protected by parity in case of a disk failure? It is odd, as I never had any issues when running a 5 x 2TB ZFS RAIDZ pool on the same hardware, and I resilvered that array after a disk failure not that long ago without noticing any data loss...

     

    Thanks again for your help.

  4. So I recently migrated from FreeNAS using two new 10TB shucked WD white label drives, 3 x 2TB WD Red drives and an old 120GB SSD from my last setup. Everything seemed to have worked fine until I performed the first scheduled parity check, which returned something like several tens of thousands of sync errors... I ran a full SMART check on the parity drive and no issues were reported. I have just run another parity check and this time it came back with 59073 errors.

     

    I know the SSD I am using for cache has some SMART errors reported, but I thought the cache drive was not part of the parity? I have also been writing to the /mnt/cache folder directly, so I am not sure if that could also be an issue? The next step I was going to take is to remove the cache drive entirely for now and see if the errors go away. Is there anything else I can do to narrow down the cause of these parity errors? I have attached my diagnostics file.
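
    For anyone wanting to repeat the SMART check, it can be run from a terminal roughly like this (just a sketch; /dev/sdX is a placeholder for whatever device the parity drive actually is):

    smartctl -t long /dev/sdX   # start an extended self-test
    smartctl -a /dev/sdX        # view attributes and self-test results once it completes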

     

    Thanks!

    illmatic-diagnostics-20191204-0939.zip

  5. Just in case anyone comes across this, what I ended up doing was:

     

    Stopped and backed up my VMs in FreeNAS and replaced the SSD cache drive with one of the new 10TB disks.
    Created a new zpool called transfer with just the single new 10TB disk.
    Created a ZFS snapshot of my main pool and used send/receive to copy it to the new disk:

    zfs snap pool@migrate
    zfs send pool@migrate | pv | zfs recv -F transfer

    That took about 31hrs to copy 6.6TB.
    Then I stopped everything accessing my pool to create a final snapshot and updated the new disk with it incrementally:

    zfs snap pool@incremental
    zfs send -i migrate pool@incremental | pv | zfs recv transfer
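
    NB: if anything writes to the transfer pool between the two sends, the incremental receive will refuse with a "destination has been modified" error; setting the pool read-only first (or using recv -F again) avoids that:

    zfs set readonly=on transfer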

    Then I was able to reboot into unraid and install the ZFS plugin to import and mount my transfer pool:

    zpool import -f transfer

    I then created my unraid array with the other 10TB disk and 3 of the remaining 2TB disks without parity.
    Finally I used rsync to move all the data from that disk into my new array:

    rsync -aHP /transfer/ /mnt/user/RAIDZ/

    NB: I used the NerdPack plugin to install screen so I could run rsync detached in the background, and I added the -H flag to preserve some hard links I had in my data.
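
    The transfer is running roughly like this (the session name is arbitrary):

    screen -dmS transfer rsync -aHP /transfer/ /mnt/user/RAIDZ/   # start rsync in a detached screen session
    screen -r transfer                                            # reattach later to check on progress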

     

    The rsync transfer is still running, but once it completes I plan to format the transfer 10TB disk and add it as the parity drive for the array.

    I already miss FreeNAS, and ZFS in particular, but I need the flexibility to create an array with odd-sized disks, and unraid seems to do everything I need so far.

  6. So I am planning on moving from FreeNAS to UNRAID, primarily to take advantage of the flexibility of increasing the size of the array.

    My current setup is 5 x 2TB drives in RAIDZ and a single SSD as a system/docker drive.

     

    My plan is to buy two 10TB drives, but I am not sure about the best way to approach the data migration, as I only have 6 available SATA ports in my HP microserver.

     

    From what I have read, I could remove the SSD, attach a single new 10TB drive, and then either:

     

    1) Copy the array data to the new 10TB migration drive in FreeNAS, reboot and initialise UNRAID with one 10TB and four 2TB disks, copy the data into the new array, and then replace a 2TB disk with the 10TB migration disk

    or

    2) Initialise UNRAID on a single 10TB drive, use the ZFS plugin to mount the pool disks, copy the data to that single drive, and then add the rest of the disks to unraid

     

    I am leaning towards number one, but does anyone know if there is a better solution, or if there would be any issues with the above? Also, I am probably going to be buying WD MyBooks, so I could possibly make use of their USB connection, but I don't think the microserver has USB 3.0, so I am not sure whether the transfer rate would be too slow to be useful.
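
    For what it's worth, I think option two would boil down to something like this once booted into UNRAID with the ZFS plugin installed (rough sketch only; the pool name and mount points are placeholders, and I am assuming the single 10TB drive ends up as /mnt/disk1):

    zpool import -o readonly=on -f pool   # mount the old RAIDZ pool read-only
    rsync -aP /pool/ /mnt/disk1/          # copy everything onto the single 10TB array disk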

     

    Any advice is appreciated!