Everything posted by itimpi

  1. Ok, but you only got those messages because you had the debug logging level active. If that had not been the case none of the messages would have appeared. In terms of the array behaviour you want, this is something that is way beyond what this plugin can achieve, and it may even be beyond what Limetech could achieve in any realistic manner.
  2. You should have been notified that the disk had been disabled as long as you have notifications enabled - did this happen? Just as an FYI, the Parity Tuning plugin will not directly cause an issue in such a scenario as it will only attempt to pause or resume an operation that was already in progress. In fact, in the log snippet posted it was idling and taking no action. Thinking about it, I could consider adding a feature to the Parity Tuning plugin where it will halt any array operation if an array disk becomes disabled while the array operation is running. I would be interested to hear if anyone thinks this might be of use. If so, should the operation be cancelled or merely paused indefinitely so that the user can cancel it? Feedback is welcomed. This would be quite easy to implement once I had a clear idea of exactly what is wanted.
  3. Many people use cache disks/pools purely for application purposes and do not bother with the original use of caching writes for files that end up on the array. The 6.9.0 release supports multiple cache pools, and how each pool is to be used is configured by the user, so you have complete flexibility.
  4. I think most people take the easy way out and simply change the share to Use Cache=Yes and let mover handle getting the file onto the array when it runs at a later point in time. A 'benefit' of the mv behaviour you describe is that from a user perspective it completes almost instantly, whereas a copy/delete takes much longer, and the user does not see the time that mover later takes to get the file onto the array as that typically happens outside prime time. You DO get the behaviour you want if the move is done by accessing the shares over the network - it is only moving files locally from within the server that exhibits this behaviour.
  5. This is a by-product of the way that the underlying Linux system implements move. It first tries to do a 'rename' if it thinks source and target are on the same mount point, and only if that fails does it do a copy/delete. In this case both appear to Linux to be under /mnt/user, so it tries the rename, which works, and the file is left on the cache. In such a case you either need to have the target share set to Use Cache=Yes so that mover later moves it to the array, or do an explicit copy/delete yourself.
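     To illustrate the logic (a minimal Python sketch of the same rename-then-fallback approach that mv and shutil.move take, not the actual mv source; the example paths are made up):

         import errno, os, shutil

         def move(src, dst):
             """Try a cheap rename first; only fall back to copy + delete
             when the kernel reports a cross-device link error."""
             try:
                 os.rename(src, dst)           # effectively instant if src and dst are on the same mount
             except OSError as exc:
                 if exc.errno != errno.EXDEV:  # EXDEV = "Invalid cross-device link"
                     raise
                 shutil.copy2(src, dst)        # different filesystems: copy the data...
                 os.remove(src)                # ...then delete the original

         # Both paths look like they are under /mnt/user to the caller,
         # so the rename succeeds and the file stays on the cache.
         move("/mnt/user/downloads/example.bin", "/mnt/user/archive/example.bin")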
  6. No - the disk should not have become disabled unless Unraid detected a (write) failure, so that is not a good sign. I suggest posting the system’s diagnostics zip file so we can see the current state and what led up to it.
  7. As far as Unraid is concerned a format is just a normal write operation, so parity is automatically updated as it is run and you would have been fine. At the level at which parity runs it is not aware of file systems and their type - just of physical sectors on the disks.
  8. Just to make sure: in step 7 you will have to do a New Config again, keeping all current assignments, and then add disk1 back before starting the array and rebuilding parity. If you simply add it back without going through the New Config step, Unraid would promptly start to clear it (writing zeroes) to maintain parity when you start the array, thus zapping the data you had just copied. An alternative approach that bypasses using New Config would have been to carry out the format change at step 4 by stopping the array; changing the disk1 format to XFS; starting the array; formatting disk1 (which would now show as unmountable and available to be formatted to XFS); and then simply copying the data back to disk1, which would now be in XFS format. The advantage of this approach is that the array would remain in a protected state throughout.
  9. This is quite normal. You need to provide the -L option for the repair to proceed. Despite the scary-sounding warning message, data loss does not normally occur, and when it does only a file that was actively being written at the point things went wrong is affected.
  10. All attached storage devices (other than the Unraid boot USB drive) count, even if they are not being used by Unraid. In your case that would be 1 + 2 + 8 = 11 devices.
  11. The default is RAID1, which means the available space is equal to the size of the SMALLER of the 2 dissimilar-sized disks (e.g. a 1TB + 500GB pool gives 500GB of usable space). It is a known issue that when the disks are of different sizes BTRFS tends to report the free space incorrectly.
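     As a rough guide (my own back-of-the-envelope approximation of how RAID1 chunk allocation works, not anything taken from the BTRFS code; the sizes are just examples):

         def btrfs_raid1_usable(sizes_gb):
             """Approximate usable space of a BTRFS RAID1 pool: every chunk is
             mirrored on two different devices, so usable space is capped both
             by half the total capacity and by what the other devices can
             mirror against the largest one."""
             total = sum(sizes_gb)
             return min(total / 2, total - max(sizes_gb))

         print(btrfs_raid1_usable([1000, 500]))    # 2 dissimilar disks -> 500.0 (the smaller one)
         print(btrfs_raid1_usable([1000, 1000]))   # 2 equal disks      -> 1000.0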
  12. The rebuild does not check the existing contents. It just works out what should be there by reading all the other disks, and then overwrites whatever is on the disk being rebuilt.
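     Conceptually it works like this (a toy Python sketch of single-parity reconstruction, not Unraid's actual md driver; the byte values are invented):

         from functools import reduce

         def rebuild_missing(parity, other_disks):
             """With single parity, parity = XOR of all data disks, so the missing
             disk is just parity XOR'd with every surviving disk. Whatever was
             physically on the replacement disk never enters the calculation."""
             surviving = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), other_disks)
             return bytes(p ^ s for p, s in zip(parity, surviving))

         # Toy 4-byte "sectors" on three data disks:
         disk1, disk2, disk3 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
         parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

         # Pretend disk2 failed: it is rebuilt from parity plus the remaining disks.
         assert rebuild_missing(parity, [disk1, disk3]) == disk2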
  13. I can only suggest that you post your diagnostics again. If the drive is disabled after the rebuild process then that suggests that a write to it failed during the rebuild process. There might be something in the diagnostics to give a clue as to what exactly happened.
  14. I have never heard of anyone actually digging into the code under /usr/src. I always assumed it was there primarily to satisfy GPL legal requirements rather than expecting anyone to spend time digging into it (although you are of course entitled to dig as much as you want). I would be interested to see if anyone can give you the pointer you want.
  15. Not sure why you interpreted my reply that way? I did not know if you had looked there and not found those source files of any use. What is NOT publicly available is the source to the emhttp daemon. It is possible that this is the part that handles recognising disks - I do not know.
  16. Have you tried looking under /usr/src on your Unraid server?
  17. Not sure what you are trying to ask. Any time you reboot your server any plugins you had installed before starting the reboot are automatically installed as part of the boot process.
  18. Since Unraid recognises drives by their serial number I would expect this to be a requirement. However this is my guess based on how other types of drives are handled - I have not looked into the code of the md driver to confirm this.
  19. You should be able to look at disk2 while the rebuild is in progress. Whatever you see there is what you will end up with when the rebuild completes. Starting VMs/dockers should not affect the rebuild but may have a performance impact if they use array drives.
  20. If it had not run against the mdX device it would have invalidated parity which would not be a good idea. What are you rebuilding? If it is the disabled disk you will end up with whatever showed on the emulated drive before the rebuild.
  21. xfs_repair will not stop a disk being disabled - it is intended to fix it being unmountable. If the drive is disabled then it is the emulated disk that is being fixed. The standard way to clear the disabled state is to rebuild the disk.
  22. SMB1 is only enabled, I believe, if you configure it to be so by enabling NetBIOS support. The help for that setting explains this. Maybe a more emphatic warning might be given?
  23. The commonest cause of this type of symptom is the flash drive dropping offline for some reason. We would need the system diagnostics zip file to confirm that.
  24. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it or any issues found trying to use the shutdown feature.
  25. If you have not already done it you might also want to try power-cycling the server in case the GPU has got into a state that means it needs to be restarted from cold. In principle Unraid always unpacks itself afresh from the archives on the flash drive, so there should be no remnant of using GUI mode when you reboot after a power-cycle.