
itimpi · Moderators · Posts: 20,779 · Days Won: 57
Everything posted by itimpi

  1. Ok. I was a bit thrown off when you mentioned automated checks and the Assistant. With the Assistant you are by definition manually running the check over a specified range. At the moment I am not sure checks run by the assistant end up in the history in a sensible form. Do you think there should be a configuration setting in the Assistant to have partial checks properly in the history and labelled as such with a correct exit status for that check?
  2. In that case what is it a % of? With absolute values it is a fixed amount of space on a drive regardless of drive size. I do not see how this can be converted to a % if a share has drives of mixed sizes as a fixed amount converts to a different % on different drives.
  3. Thanks for the confirmation. I will work on getting the fix out soon. Just out of interest, is there any reason you do it this way rather than simply setting up increments to automatically run the checks (probably during off-peak hours) with no further manual involvement? I had not really envisaged users using the Assistant in this way as automated increments seemed easier and less effort.
  4. I see where it says %, but since it is talking about drives I assume that is % of drive size? I think this is something that has often been requested. I CAN however make a change and set a specific value such as 10GB without problems.
  5. This is where the logic breaks down. The parity mechanism has no understanding of files and does not know that a particular file is corrupt, let alone which sectors on the drive might be involved.
  6. It sometimes helps to download the zip file for the release from the unraid site and then extract all the bz* type files and overwrite the copies on the flash drive.
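     A minimal sketch of that process from the command line, assuming the release zip has already been downloaded to /tmp and that the flash drive is mounted at /boot (the filename and paths are examples, not a specific release):

        cd /tmp
        unzip unRAIDServer-x.y.z-x86_64.zip -d unraid-release   # extract the downloaded release
        cp unraid-release/bz* /boot/                            # overwrite the bz* files on the flash drive
        sync                                                    # flush writes to the flash before rebooting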
  7. Note that if you have 1 data drive + 1 parity drive in the main array this is a special case where the drives end up mirrored, but that would no longer be the case if additional data drives were added later. Also, this is the Unraid-specific handling of parity, not traditional RAID1. It is better to think of 1 parity drive being able to handle 1 physical drive failing. If you want true RAID1 then you can instead set up a 2-drive pool (using BTRFS or ZFS). If a drive in the main array fails then Unraid will continue to operate as if it were still present, using the combination of the parity drive(s) and any non-failed data drives. Running without parity is a perfectly viable strategy if this is a backup server. It just means that if a drive fails then there might be a big copying task required to get the backup server up-to-date again.
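     As a purely illustrative sketch of the single-parity idea (the byte values and shell arithmetic below are just an example, not how Unraid stores anything internally): parity is computed across the data drives, and any one missing drive can be reconstructed from parity plus the surviving drives.

        d1=0xA5; d2=0x3C                        # example bytes at the same position on two data drives
        parity=$(( d1 ^ d2 ))                   # the parity drive holds the XOR of the data drives
        rebuilt_d2=$(( parity ^ d1 ))           # if drive 2 fails, its byte is recovered from parity + drive 1
        printf 'parity=%#x rebuilt_d2=%#x\n' "$parity" "$rebuilt_d2"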
  8. Assuming the NIC is working, this suggests a problem with either the network cabling or the router. Are you sure you have the network cable plugged into the right port and that it is properly seated?
  9. There is no SMART information for the cache drive so it looks like it has dropped offline. That would explain the appdata share disappearing if it has all been moved to the cache. I would carefully check the cables (power and SATA) to the drive, then power cycle the server and post new diagnostics when it comes back. I notice that the ‘system’ share is also set to be moved to the cache but currently has files on disk1. If you want this to be moved then the Docker and VM services need to be disabled when you run mover, as those services keep files open which prevents them from being moved.
  10. ... or depending on the file type you might be able to obtain a new copy from an online source. That is why each user needs to decide for themselves how important any particular piece of data is, and how they would handle it being corrupted or lost. At the very least, anything important must be replicated somewhere: a local backup, an offsite backup or an online backup (ideally all of these).
  11. Can you please confirm that you were trying to run something via the Parity Check Tuning Assistant? That line is meant to be a call to a subroutine that can help with debugging issues, and the subroutine is not being found. Since it is only a debugging aid it did not necessarily show up in functional testing, but it still needs to be fixed. If you can confirm that you were using the Assistant then I can make and test the fix that needs doing.
  12. Was the drive marked as disabled (red ‘x’) when you tried the repair? I am trying to determine whether you were running the repair against the physical drive or the emulated one. If it was the physical drive, then it is worth stopping the array, unassigning the drive, and restarting the array so the drive is now emulated. If it still shows unmountable then you can try running the repair on the emulated drive to see if that repairs better. If the drive DOES now mount when starting in normal mode (or the repair succeeded) you can examine its contents to see if it looks OK, as that is what you will end up with if you now rebuild. You can optionally also try running XFS Explorer against the current physical drive. Note that XFS Explorer is not free if you want to use the repair/recovery features, but you can see what it would find using the free option.
  13. How did you run the XFS repair - from the GUI or the command line? If the command line, exactly what command did you use, as getting it wrong can produce the symptoms you described. If you get an unmountable drive showing before you start a rebuild then this is what you will get after a rebuild. Running XFS repair was the right thing to do to fix an 'unmountable' status, but when it failed you should have stopped and asked for advice on the best way forward. It is a shame you formatted the drive outside the array as that has reduced the chance of getting any data off it by mounting it outside the array using Unassigned Devices, although you may find recovery software such as XFS Explorer on Windows may still be able to get data off it.
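     For reference, a hedged sketch of what a command-line repair normally looks like (the disk number and device naming are examples only; the array needs to be started in Maintenance mode, and newer Unraid releases use /dev/mdXp1 where older ones used /dev/mdX):

        xfs_repair -v /dev/md1p1     # repair disk1 via the md device so parity stays in sync
        # running it against the raw device instead (e.g. /dev/sdb1) bypasses parity
        # and is a common way to end up with the symptoms described above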
  14. Yes. It really only adds value if you have XFS (or ReiserFS) formatted drives in the array.
  15. In terms of having them as array drives, ZFS and BTRFS offer very similar capabilities in protecting against bitrot. An unresolved question at the moment is whether ZFS or BTRFS will prove more resilient in such a configuration if hardware problems are encountered, and whether performance is the same (I expect it is likely to be).
  16. Looking at your diagnostics it looks as if all your drives are showing 100% used so that is probably the cause of the problem.
  17. The requirement for at least 1 drive in the array is a legacy one, but there is nothing stopping it being a small flash drive. It is intended that this restriction will be removed in a future release.
  18. You are meant to create the data set from the Share tab. That is also where you specify the primary & secondary locations for the data.
  19. Disk2 in the array. When the syslog contains entries like:
        Jun 20 20:50:11 Felix-Server kernel: md: disk2 read error, sector=271816
        Jun 20 20:50:11 Felix-Server kernel: md: disk2 read error, sector=271824
        Jun 20 20:50:11 Felix-Server kernel: md: disk2 read error, sector=271832
        Jun 20 20:50:11 Felix-Server kernel: md: disk2 read error, sector=271840
      this always refers to disk2 in the Unraid GUI as Unraid does not care how the drives are physically located.
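     If it helps confirm which disk is involved, something like the following can be run from the console (the path assumes the standard Unraid syslog location):

        grep 'read error' /var/log/syslog | tail    # show the most recent md read errors and the disk number they mention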
  20. Those diagnostics are full of read errors on disk2 and since there is no SMART report for that drive it has probably dropped offline.
  21. For read purposes, Unraid finds files that are part of a share regardless of whether they are located on the main array or on a pool. The Primary storage setting only determines where NEW files are placed.
  22. Why do you even want a cache drive on the backup server? Mover is going to be much slower at moving files to the array than rsync would be at writing files to the cache, so the backup server would need to be powered on for significantly longer than if you ran without the cache and wrote directly to the array. Mover is designed to run in idle time when the server is powered on but not otherwise doing much, and from your description that is not how your backup server is expected to run.
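     As an illustration only (the share, host and paths below are made up), writing the backup straight to a share whose primary storage is the array avoids mover entirely:

        rsync -av --delete /mnt/user/photos/ root@backup-server:/mnt/user/backup/photos/
        # the 'backup' share on backup-server is assumed to have the array as its
        # primary storage and no cache/pool, so rsync writes land directly on the array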
  23. It is still a requirement that you have at least one drive in the array so leave that alone. You also want to change the share settings so that it has the ZFS pool as the primary storage with nothing as secondary storage if you want things to work correctly, and have no reference to the share (not even a symlink) on the array.