itimpi · Moderators · Posts: 20,699 · Days Won: 56

Everything posted by itimpi

  1. You should look at the Help for the Use Cache setting for a share. The value Yes does not do what you seem to think: although data written to the share initially lands on the cache, that setting means mover will later move it to the main array. You almost certainly want it to be Only for your use case.
  2. I have never seen that particular error message before, so I do not know what it means. I think you are going to need to let it scan the disk to see if it can find a valid superblock (which can take hours on a large disk). Maybe someone else will have a suggestion?
  3. Did you do this from the command line? If so, what is the exact command that you used?
  4. If UFS Explorer cannot do the job then I think it is unlikely there is anything else that will work.
  5. Have you disabled the Docker and VM services? If they are left running they will have files open that mover then cannot move. In addition, if files already exist on the array, then the files will not be moved off the cache. This should not happen if everything is working correctly, but it has been known to happen if at some point in the past a problem led to the cache being offline so that files got created on the array instead.
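     If lsof is available on your system, a quick way to see what is still holding files open on the cache before running mover is something like the following (using /mnt/cache as the pool path; adjust to your pool name). Running Docker containers and VMs typically show up in this list:
         # list processes that have files open on the cache pool
         lsof /mnt/cache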
  6. Yes, stop the array and restart in normal mode. The status does not get changed until the system next tries to mount the drive and I would expect the disk to now mount OK.
  7. You need to rerun the check without the -n (no modify) flag so that fixing is allowed, and add the -L flag.
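     For reference, when run from the command line against the array device the repair would look something like the sketch below, with X replaced by the actual disk number (running the check from the drive's page on the Main tab builds the device name for you):
         # rerun without -n (no-modify) so repairs are actually made, and with -L to zero the log
         xfs_repair -Lv /dev/mdX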
  8. It would have been created the same way (probably with a name of ‘cache’). That means the physical path is /mnt/cache, which is equivalent to the /mnt/uhd physical path for the new pool. Both pools are part of the User Shares, which appear at /mnt/user/share-name where ‘share-name’ corresponds to a top-level folder on the appropriate physical drives/pool. It is important to realise that User Shares are just a logical view, so it is normal for the same file to appear both under its share name and under its physical location.
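     As an illustration, the same file is visible under both paths; the share and file names below are purely hypothetical examples:
         # hypothetical share ‘Media’ and file, shown at its physical location and via the user share
         ls -l /mnt/cache/Media/example.mkv
         ls -l /mnt/user/Media/example.mkv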
  9. The easiest thing to do is NOT have the Domains share set to Prefer but to "Only" or "No", which will stop mover taking any action on the share. Then manually move the vdisk files for the VMs you want on the cache to that location. It is also not mandated that vdisk files HAVE to be in the Domains share - that is just the default. You then handle backing up any vdisk files on the cache (or wherever you have placed them) as needed using either your own backup script or the VM Backup plugin. Note that existing vdisk files will be found on cache/array regardless of the setting - in such cases the setting just determines where NEW files get created.
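     A minimal sketch of the manual move, assuming a hypothetical VM named ‘win10’, the default domains share layout, and a vdisk that currently sits on disk1 (adjust names and paths to suit your system):
         # shut the VM down first (and wait for it to stop) so the vdisk is not in use
         virsh shutdown win10
         # move the vdisk between physical paths (not via /mnt/user)
         mkdir -p /mnt/cache/domains/win10
         mv /mnt/disk1/domains/win10/vdisk1.img /mnt/cache/domains/win10/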
  10. How full a drive is will have no effect on a parity check, as the parity check works at the raw sector level and is unaware of the meaning of the contents of the sectors - just that they have a bit pattern it is going to use/check.
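     With single parity each parity bit is simply the XOR of the corresponding bits on the data drives, so every sector takes part whether or not it belongs to a file. A trivial illustration of that calculation:
         # parity bit for data bits 1, 0 and 1 - the result is the same whatever the bits represent
         echo $(( 1 ^ 0 ^ 1 ))   # prints 0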
  11. I was thinking through what you said and you might get away with just the rebuild! After the rebuild completes the contents of the rebuilt data drive will agree with what parity plus all the other data drives contain, but those contents are likely to be badly corrupt at best. However, since you said you were not worried about preserving the drive's contents, if you now follow the procedure for Reformatting a drive the fact that the contents may currently be invalid is not relevant, as the format operation will create a new empty file system and update parity accordingly. Note that this is rather a special case where you are definitely going to discard the contents of the rebuilt drive.
  12. No - UD is hot-plug aware as long as your hardware supports it. That should always be true for USB devices and is normally true for SATA/SAS ones.
  13. No. You do not at this point know how (if at all) parity might have been corrupted by the bad disk. Rebuilding the data drive just assumes that the parity is valid and that all the other disks are fine. It is highly likely that the rebuild is going to result in a badly corrupt file system. Your scenario would work if the drive being rebuilt now failed, but not if a different one failed. As I said, the only way you can be confident that the parity is valid for the complete disk set is to run a parity check anyway, so why not do it from the outset by rebuilding parity.
  14. I do not think that it is safe to assume that the current parity is valid after that level of error, so you are almost certainly going to need to do a correcting parity check anyway after the rebuild. I would go directly for rebuilding parity from scratch as at least that way (as long as it completes without error) you know it matches the current drive set.
  15. The warning is because many people were putting packages there that might break a future unRaid release, forgetting they had done so, and then wondering why upgrades went wrong due to a package being incompatible with the upgrade. If the package is available in NerdPack (or DevPack) then that is preferred, as it is regularly updated to install the correct versions of packages for any specific unRaid release. However, if it is not available in NerdPack then your only easy option for installing it is to continue doing it via ‘/boot/extra’, but you might want to raise a request in the support thread for NerdPack to see if your desired package can be considered for inclusion going forward.
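     If you do stay with the /boot/extra route, it is just a matter of copying the package onto the flash drive; the package file name below is purely a placeholder:
         # packages placed here are installed automatically at boot
         cp somepackage-1.0-x86_64-1.txz /boot/extra/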
  16. Changing drive assignments after a New Config is exactly the same process for both data and parity drives. You can change them to be different drives or leave the entry unassigned. It is perfectly permissible to have gaps in the data drive assignments, so whether you move them up because you prefer it that way or leave a gap is up to you. The important point is to make sure that any drive assigned to a parity slot does not contain contents you want to keep, as when you start the array to build new parity (based on the drive assignments at that point) any data on the parity drives would be lost as it is over-written with parity information.
  17. That drive is very sick, as each Pending sector indicates a sector that cannot be read reliably (and can thus result in the corresponding sector on the parity drive potentially having the wrong contents). Reallocated sectors, while not necessarily a problem if they are stable, are a big warning sign if the number is not small. With that drive in the system I would not assume that the contents of the parity drive are valid enough that parity plus the remaining drives can rebuild any failed drive without serious file system corruption on the rebuilt drive. Since you say the content of that drive is unimportant I would suggest: do Tools -> New Config and select the option to retain all current settings; return to the Main tab and change the problem drive slot to its replacement; then rebuild parity with the new drive set. Hopefully this time it will build without drive level errors so it can be assumed valid. You can then format the replacement drive to create an empty file system on it so it is ready to receive data.
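     If you want to confirm the state of the suspect drive from the command line, something like the following (with sdX replaced by the actual device) pulls out the two attributes mentioned above:
         # show the Reallocated and Pending sector counts from the drive's SMART data
         smartctl -A /dev/sdX | grep -Ei 'Reallocated_Sector|Current_Pending'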
  18. That will work, but since the parity swap procedure works on the parity drive first and only starts on the data drive when that completes (at which point it automatically begins rebuilding it), it does not really save much time over doing it in 2 steps. I would have kept them separate just so I had points at which I could check that the previous step was OK.
  19. This is already possible in the 6.9.x releases as long as the code that generates the notification message provides the correct URL information as a parameter to the call to create the message. As such it is up to authors to update their code to utilise this feature.
  20. You are going to have to do the changes in 3 steps (starting with the parity drive) as with single parity you can only do 1 disk at a time. Preclearing the disk(s) is not required unless you want to run an initial stress test before committing them to the unRaid array. Also you cannot run preclear against a disk that is already part of the array.
  21. This is mentioned in the release notes as something you have to do if you roll back to 6.8.3 from a 6.9.x release. I wish the unRaid upgrade process would display the release notes (with an option to abort the upgrade) before actually doing it. At the moment it is too easy to upgrade without even glancing at the release notes to look for potential pitfalls.
  22. The correct command would have been xfs_repair -Lv /dev/mdX where X is the disk number; the md device both handles the partition number for you and maintains parity as corrections are made. It is better to run the command from the GUI by clicking on the drive on the Main tab, as it handles the device name for you. If running against the raw device the command is xfs_repair -Lv /dev/sdb1 - note the additional '1' on the end to specify the partition. However, doing it this way does not update parity as corrections are made, so you then need to run a correcting parity check to get parity to be correct.
  23. Edit the title on the first post in the thread to add "(SOLVED)".
  24. Just realized - I hope you did not run this exact command from the command line? If you did it via the GUI it would have (correctly) run the command against the appropriate mdX device, which maintains parity. Running directly against an sdX type device invalidates parity, and you have to add the partition number on the end.
  25. Stop the array and restart it in normal mode. The drive should now mount OK.