Everything posted by itimpi

  1. When you set up multiple drives in a cache pool it will default to RAID1 if using btrfs. You can dynamically change the profile used by a btrfs pool while it is running if you want some other profile (see the btrfs balance sketch after this list).
  2. Should be easy enough to check, as .cfg files are normally plain text and thus human readable.
  3. Are you sure your power supply is up to handling the load when a parity check is running? The other thing to check is that you do not have a thermal issue where the CPU overheating shuts down the server.
  4. There is a good write-up on how parity works in the Overview section of the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  5. Parity does not back up your shares - it merely provides the mechanism by which, if an array drive fails, its contents can be reconstructed. You still need backups of any important/critical data. In terms of shares showing some files as unprotected, that is normal if the share in question has files on a pool (cache) and that pool is not redundant.
  6. You should do it via the GUI, as doing it from the CLI is error prone.
  7. This implies to me that in the first case you are looking in the ‘data’ share, whereas the other one is looking in the ‘blue-ray’ share? Shares are effectively the aggregate of any top-level folder with a given name across all array drives and pools, so you should not need to do anything if the files are in the correct location on each drive (see the share layout sketch after this list). If you think all files are in the correct location then it can be worthwhile to run a filesystem check across all your drives, as corruption at the filesystem level has been known to interfere with user shares being visible.
  8. The cache drive is not showing up, but since you rebooted before taking diagnostics we cannot see what led up to that. Your best chance at this point is to power-cycle the server to see if the cache drive becomes visible again. If not then it has probably failed.
  9. @Jimmeh I notice that you have your regular scheduled check set to be a correcting one. It is normally recommended that this be non-correcting so that a drive that is acting up will not end up corrupting parity. You normally only want to run correcting checks manually when you are reasonably certain you have no outstanding hardware issues with any drives.
  10. The Parity Check Tuning plugin will get the correct times in the final report and the history record when the operation completes. I believe the problem is that the standard Unraid reporting does not properly take into account the fact that an array operation is being run in increments.
  11. Have you tried doing a file system check on the disk?
  12. @Jimmeh I think I have tracked down why you were getting that 255 error in the syslog (and the operation not resuming) and am testing my fix. Hopefully the two files you sent me will allow me to track down why the history entry is going wrong. I can see from the .save file that the generated record is wrong and should have had ‘check P’ in the operation type field. If you want you can edit the entries in the parity-checks.log file to have ‘check P’ instead of ‘recon D5’ to get them displayed correctly.
  13. I am reworking the code that handles pause/resume around mover/CA Backup running, so hopefully that will either fix the issue of resume not working, or if not at least give some insight as to why. As to the history giving strange results, can you look at the config/parity-check.log file on the flash drive (it is a human readable text file) to see what the last few entries show (and possibly let me have a copy) so I can determine if the problem is something that has crept in around displaying the history, or if it is actually an issue recording it in that file (which should be obvious). If the latter I would appreciate a copy of the parity.tuning.progress.save file from the plugins folder on the flash drive as that would have been used to generate the last history entry.
  14. Nothing that can explain the error message occurring, but I will look again to see if I can replicate it in any way. What they DO show is the array operation being paused due to CA Backup running and not being resumed when it is detected that it is no longer running. The diagnostics also give me a copy of your current plugin settings so I can use those for testing. I will add some additional logic to see if I can detect why the resume is not happening, as the plugin has detected that the CA Backup completed and a resume is required, but it is not actually issuing it. Maybe if I release an updated plugin for this your other issue might disappear as well.
  15. CRC errors are rarely the drive - the cabling is nearly always the culprit. Do not forget - it can also be the power cabling (or even the PSU itself) and not just the SATA cabling.
  16. The process is covered here in the online documentation and the array will be accessible (albeit with degraded performance) during the process. Not quite sure why you HAVE to do this - there is no requirement with dual parity that both parity drives be the same size - just that neither can be smaller than the largest data drive.
  17. Might it not be better to start without the 'domains' share set to move to the cache? If that share DOES contain VM images (that is quite normal) that will not fit onto the cache, there is not much point in trying to move them, and it is then better to think about where you want them to end up. Even if they do currently fit, if they are sparse files they will almost certainly later grow to exceed the cache size, causing problems at that point. It is also a good idea to set the Minimum Free Space for the 'cache' pool to be larger than the biggest file to be cached to help avoid the cache ever getting completely full.
  18. It looks as if your docker.img has been corrupted due to the cache getting completely full. BTRFS file systems are prone to corruption if they run out of free space (see the free-space check sketch after this list). You should set a Minimum Free Space on your pools to set the point at which they should start overflowing to the array rather than continuing to fill the cache drive and causing problems. Once some space is available on the cache pool you should recreate your docker image file as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  19. You do not give any indication of how you failed. The standard process for recreating the docker image file and reloading applications is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  20. If you use a custom network then I believe that port mapping does not apply and you can only use the ports built into the container.
  21. I think it is planned that you will get that (in 6.13 or later) when the distinction between the current main array and pools is meant to be removed (the current array type will become another pool type).
  22. You should restart the array in normal mode and then get new diagnostics so we can see if the problem drives now mount OK.
  23. If you keep the file system type the same, the data will still be there on the drive (which is now a cache drive). If the relevant shares are set to Use Cache=Yes and pointed at that cache drive, mover will subsequently transfer the data to the array.
  24. The process is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  25. If you keep the same file system type then it should be possible to assign the SSD as a pool device (which you can then use for caching purposes) on the new system and its data will be retained. This would not be the case if you wanted to change the file system type, as that would result in formatting the drive, erasing any current contents.
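
For the profile change mentioned in item 1, below is a minimal sketch of the relevant btrfs commands, assuming the pool is mounted at /mnt/cache (the mount point and target profiles are illustrative assumptions - adjust them to suit your own pool):

    # Show the current data/metadata profiles and space usage for the pool
    btrfs filesystem usage /mnt/cache

    # Convert the data profile to single while keeping metadata at raid1
    # (can be run while the pool is mounted and in use; may take a while)
    btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

    # Check on the progress of a running balance
    btrfs balance status /mnt/cache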
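
To illustrate the user share aggregation described in item 7, here is a hypothetical layout (the disk numbers, share name, and file names are made up for the example; real paths will differ):

    # The same top-level folder name on several drives/pools...
    ls /mnt/disk1/data    # e.g. fileA
    ls /mnt/disk2/data    # e.g. fileB
    ls /mnt/cache/data    # e.g. fileC (not yet moved to the array)

    # ...is presented as a single user share containing all of them
    ls /mnt/user/data     # fileA  fileB  fileC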
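
For the full-cache situation in item 18, a quick way to confirm how much free space btrfs actually has left (again assuming the pool is mounted at /mnt/cache) is:

    # Allocation breakdown, including unallocated space btrfs can still draw on
    btrfs filesystem usage /mnt/cache

    # Simpler free-space view as reported to the OS
    df -h /mnt/cache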