Everything posted by itimpi

  1. This is a by-product of the way the underlying Linux system implements a move. It first tries to do a ‘rename’ if it thinks source and target are on the same mount point, and only if that fails does it do a copy/delete. In this case both appear to Linux to be under ‘/mnt/user’, so it does the rename, which succeeds, and the file is left on the cache. In such a case you either need to have the target share set to Use Cache=Yes so that mover later moves it to the array, or do an explicit copy/delete yourself.
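     As a minimal illustration of the explicit copy/delete route (the share and file names here are hypothetical), working against the real mount points rather than /mnt/user avoids the rename shortcut:

         # Copy from the cache to disk1 explicitly, then remove the original.
         mkdir -p /mnt/disk1/MyShare
         rsync -av /mnt/cache/MyShare/somefile.mkv /mnt/disk1/MyShare/
         rm /mnt/cache/MyShare/somefile.mkv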
  2. No - the disk should not have become disabled unless Unraid detected a (write) failure, so that is not a good sign. I suggest posting the system’s diagnostics zip file so we can see the current state and what led up to it.
  3. As far as Unraid is concerned a format is just a normal write operation, so parity is automatically updated as it runs, and you would have been fine. At the level at which parity operates it is not aware of file systems and their type - just of physical sectors on the disks.
  4. Just to make sure: in step 7 you will have to do a New Config again, keeping all current assignments, and then add disk1 back before starting the array and rebuilding parity. If you simply add it back without going through the New Config step, Unraid would promptly start to clear it (writing zeroes) to maintain parity when you start the array, thus zapping the data you had just copied. An alternative approach that bypasses New Config would have been to carry out the format change at step 4 by stopping the array; changing the disk1 format to XFS; starting the array; formatting disk1 (which would now show as unmountable and available to be formatted to XFS); and then simply copying the data back to disk1, which would now be in XFS format. The advantage of this approach is that the array remains in a protected state throughout.
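     A rough sketch of the copy-back step in that alternative approach (the disk numbers and folder name are hypothetical; adjust to wherever you parked the data):

         # With disk1 freshly formatted as XFS, copy the parked data back,
         # preserving permissions, timestamps and extended attributes.
         rsync -avX /mnt/disk2/parked_from_disk1/ /mnt/disk1/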
  5. This is quite normal. You need to provide the -L option for the repair to proceed. Despite the scary-sounding warning message, data loss does not normally occur, and when it does, only a file that was actively being written at the point things went wrong is affected.
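     For reference, a typical invocation from the console with the array started in Maintenance mode, assuming the affected drive is disk1 (substitute the correct disk number); running against the mdX device keeps parity in step with the repair:

         xfs_repair -L /dev/md1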
  6. All attached storage devices (other than the Unraid boot USB drive) count, even if they are not being used by Unraid. In your case that would be 1 + 2 + 8 = 11 devices.
  7. The default is RAID1, which means the available space is equal to the SMALLER of the two disks when they are of dissimilar sizes. It is a known issue that BTRFS tends to report space incorrectly when the disks are of different sizes.
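     If you want to see what BTRFS itself thinks is usable rather than what the generic tools report, the filesystem-specific usage command gives the fuller picture (assuming the pool is mounted at /mnt/cache):

         btrfs filesystem usage /mnt/cache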
  8. The rebuild does not check the existing contents. It just works out what should be there by reading all the other disks, and then overwrites whatever is on the disk being rebuilt.
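     As a simplified illustration of the single-parity case (real parity works on raw sectors across every data disk, not single bytes, and the values here are made up), the rebuilt data is just the XOR of parity with the surviving disks - whatever currently sits on the disk being rebuilt never enters the calculation:

         d1=0xA5; d3=0x0F                     # surviving data "disks"
         p=$(( 0xA5 ^ 0x3C ^ 0x0F ))          # parity written when disk2 held 0x3C
         d2=$(( p ^ d1 ^ d3 ))                # rebuild disk2 from parity + the others
         printf 'rebuilt disk2 = 0x%02X\n' "$d2"    # prints 0x3C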
  9. I can only suggest that you post your diagnostics again. If the drive is disabled after the rebuild process then that suggests that a write to it failed during the rebuild process. There might be something in the diagnostics to give a clue as to what exactly happened.
  10. I have never heard of anyone actually digging into the code under /usr/src. I always assumed it was there primarily to satisfy GPL legal requirements rather than in the expectation that anyone would spend time digging into it (although you are of course entitled to dig as much as you want). I would be interested to see if anyone can give you the pointer you want.
  11. Not sure why you interpreted my reply that way? I did not know whether you had looked there and found those source files of no use. What is NOT publicly available is the source to the emhttp daemon. It is possible that is the part that handles recognising disks - I do not know.
  12. Have you tried looking under /usr/src on your Unraid server?
  13. Not sure what you are trying to ask. Any time you reboot your server any plugins you had installed before starting the reboot are automatically installed as part of the boot process.
  14. Since Unraid recognises drives by their serial number I would expect this to be a requirement. However this is my guess based on how other types of drives are handled - I have not looked into the code of the md driver to confirm this.
  15. You should be able to look at disk2 while the rebuild is in progress. Whatever you see there is what you will end up with when the rebuild completes. Starting VMs/dockers should not affect the rebuild, but may have a performance impact if they use array drives.
  16. If it had not run against the mdX device it would have invalidated parity which would not be a good idea. What are you rebuilding? If it is the disabled disk you will end up with whatever showed on the emulated drive before the rebuild.
  17. xfs_repair will not stop a disk being disabled - it is intended to fix it being unmountable. If the drive is disabled then it is the emulated disk that is being fixed. The standard way to clear the disabled state is to rebuild the disk.
  18. SMB1 is, I believe, only enabled if you configure it to be so by enabling NetBIOS support. The help for that setting explains this. Maybe a more emphatic warning should be given?
  19. The most common cause of this type of symptom is the flash drive having dropped offline for some reason. We would need the system diagnostics zip file to confirm that.
  20. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it, or any issues found trying to use the shutdown feature.
  21. If you have not already done it you might also want to try power-cycling the server, in case the GPU has got into a state that means it needs to be restarted from cold. In principle Unraid always unpacks itself afresh from the archives on the flash drive, so there should be no remnant of using GUI mode when you reboot after a power-cycle.
  22. An option to shut down the server if any array or cache drive reaches the temperature threshold you define has been added. If set, this will function independently of whether any array or cache operation is active. The prime use case is seen as protecting your drives if your Unraid server's cooling fails for any reason.
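      As a rough illustration of the idea only (this is not the plugin's actual code, and the device list and threshold are hypothetical), the check amounts to something along these lines:

          LIMIT=55    # degrees C - hypothetical threshold
          for dev in /dev/sd[a-z]; do
              temp=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10}')
              if [ -n "$temp" ] && [ "$temp" -ge "$LIMIT" ]; then
                  logger "Drive $dev has reached ${temp}C - shutting down to protect it"
                  /sbin/poweroff
              fi
          done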
  23. As far as I know most common brands work fine in the 6.8.3 release. 10Gb NICs have been around much longer than 2.5Gb ones.
  24. Are you saying that if you plug the USB drive into another machine you cannot read its contents? If you can, immediately copy its contents as a backup. If you cannot read the USB drive contents but have a backup on an array drive, then there are ways to get at that. Examples are to use a ‘trial’ Unraid licence, or to use a standard Linux ‘live’ build to get at the drive (Unraid uses standard Linux file systems). If none of those options is possible/practical for you then we need to know more before giving advice on the best way to recover. For example, do you run dockers/VMs that also need recovering?
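      As an example of the Linux ‘live’ build route (the device name is hypothetical - identify the right one with lsblk first), an XFS-formatted array disk can be mounted read-only to get at a flash backup stored on it:

          mkdir -p /mnt/recover
          mount -o ro /dev/sdb1 /mnt/recover
          ls /mnt/recover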
  25. In practice most commodity hardware works fine unless it is niche in some way. The one ‘gotcha’ can be hardware that has only recently been released, for which the relevant Linux drivers may not yet be included in Unraid, and it is not easy/practical for the average user to add them. Limetech are very good about adding drivers to new releases once the manufacturer releases them for Linux, but this takes time. A good example is the relatively recent situation of 2.5Gb NICs becoming commonplace on new motherboards: support for these is not in the current 6.8.3 stable release but has been added to the 6.9.0 release (currently in beta). As was mentioned, the best thing to do is post the details of what you are thinking of using, as there is an excellent chance an existing Unraid user has experience of the hardware and can give feedback. If not direct experience, they may be able to direct you to relevant forum posts.