Everything posted by itimpi

  1. Are you trying to boot in UEFI or Legacy mode? Whichever one you normally use, have you tried the other one?
  2. That does a cache->array move - I assume you got it backwards
  3. If you have duplicate files at the two locations then mover will not move them, and you need to work out manually which copy is the current one and remove the other one (a sketch of one way to find such duplicates appears after this list).
  4. Only if the pool is set up to provide redundancy (i.e. RAID1). Your comment on size suggests this may not be the case. If you provide your system's diagnostics zip file then we can check for you.
  5. Was looking around and found a 24x2.5” drive cage in 5-in-3 form factor! A bit expensive but could be of interest to those interested in large arrays of 2.5” disks (or SATA SSD in that form factor). Did not realise you could get that many drives into such a small footprint as that is nearly 5 times the number of 3.5” drives that would fit in that space.
  6. Worked for me about 10 minutes ago.
  7. I wonder if that is because you tried to put the pool under /mnt/user and not directly under /mnt which is where pools normally get mounted?
  8. It is quite easy to convert a plugin that has hard-coded text strings to one that has them in an external file that can form the basis of a translation. Limetech has a document on the steps involved. The plugin author does not need to understand any of the foreign language, as it is the English text that acts as the key to the relevant strings in the translation file for any other language (the pattern is illustrated in a sketch after this list). My guess is that as soon as the plugin appears to be stable this work will be done to externalize all the strings needed. If it is not done directly by the plugin maintainer then third parties can easily submit pull requests to add this capability.
  9. Not quite sure what app you are looking for? The pre-clear is available via the Unassigned Devices version (which requires the Unassigned Devices plugin to be installed, which is almost universally the case) or as a Docker container.
  10. That sounds like the built-in Parity Check scheduling feature - not the Parity Check Tuning settings?
  11. Limetech have already stated that they intend to make the current Unraid array effectively a pool type, and that a future feature will be the ability to have any mix of pool types you want. The difference is that you will then be able to have BTRFS, ZFS and Unraid type pools in whatever combination of primary and secondary storage you want. Also note that Unraid type arrays/pools can already have individual disks in BTRFS or ZFS format without losing the current easy expansion capability of the Unraid array. Do not forget that you keep the flexibility you mention even in the extremely unlikely event of XFS being removed from Linux (which I would think is at least 10 years off if it ever happens at all). After all, it will be about 5 years between the first announcement that ReiserFS would be deprecated and the date that it gets removed from Linux kernels, and that is despite the fact that ReiserFS has severe technical limitations (i.e. it cannot support modern large drives) that do not apply to XFS.
  12. Without diagnostics taken when the problem occurred (and before rebooting) we have no idea why the drive got disabled. Did you ever try to rebuild parity? It is quite likely the problem had nothing to do with the upgrade.
  13. You need to set the array as secondary storage and then make sure mover is set to move files to the array. You then run mover to carry out the transfer. If moving any of the appdata, system or isos shares then you want the docker and VM services disabled, as they hold files open which prevents them being moved (a sketch after this list shows one way to see what is holding files open).
  14. Not directly. Modern hard drives are meant to reallocate bad sectors if a write to them fails, so in that sense you do not ‘mark’ bad sectors, but it is possible running a pre-clear might cause problem sectors to be reallocated. I would suggest your best chance is to run a pre-clear (you can skip the initial read phase) and, if that succeeds, run an extended SMART test (see the sketch after this list). If they both pass you can probably use the drive with reasonable safety.
  15. Not completely without his disk drives! You could do a partial setup using your own disk drives, but then when he got the flash drive from you he would at the very least have to use the New Config tool and set up the array at his end, and whatever he sets up would need to be compatible with any assumptions you made about drive assignments.
  16. I have never heard even a hint that XFS might be removed. The only reason I could see it ever being removed is if the Linux kernel dropped support for XFS (as is scheduled to happen for ReiserFS), but I think this is extremely unlikely. Is there any reason that you even thought XFS might be removed?
  17. Had you stopped the docker and VM services? They keep those files open which would stop mover from transferring them.
  18. It could be that one of the timeouts mentioned in the documentation is set to a value where things sometimes complete in time and other times do not. If so then increasing it slightly might make the issue less likely to occur.
  19. If you can get diagnostics from the CLI then the diagnostics zip file is written to the ‘logs’ folder on the flash drive and that is what you should post.
  20. I agree. I might submit a pull request to the documentation (it is on GitHub) to add this as a suggestion. It needs updating anyway to say that starting with the Unraid 6.12 release the partition is now part of the device name (e.g. /dev/md1p1 instead of just /dev/md1) - a sketch after this list shows one way of handling both forms. I ‘think’ this change has been made as part of future plans to support ZFS format drives brought across from other systems where the ZFS data is on a partition other than the first.
  21. @Swarles The behaviour you should now expect is:
      - No action will be taken for manual checks unless you have activated that option within the plugin's settings.
      - If at the start time for the increment the check is paused then it will be resumed.
      - If at the end time for the increment the check is running then it will be paused.
      - If you have enabled the options to pause if mover/CA Backup are running, or on overheating, then this happens independently of whether you have manually paused or resumed a check. The only caveat is that if the check is paused due to one of these conditions AND manual checks are set for increments AND one of them caused a pause within the increment period, then no resume is issued if they end after the increment end time.
      - If you manually pause or resume a check then, as long as the above conditions are not met, the plugin should take no action.
      I THINK this is what you described as happening? I also think it will meet most people's needs, as the expectation is that you will set increments to run in idle periods when there is probably nobody using the server or looking at the GUI, and manually issue pauses or resumes outside this time if you want them. (A sketch of these rules written as logic appears after this list.)
  22. I have decided I like the new behaviour but would like the timeout to be longer (possibly something like 10 seconds). Maybe it IS worth making the timeout configurable (with 0 meaning no timeout) to satisfy both camps?
  23. Thought it was worth mentioning that the latest plugin update should now behave as you expected (at least it appears to in my testing). I have also added putting an entry in the syslog when a manual action is detected (although it may take a few minutes for the plugin to detect it) which should help make it clearer if something is happening that is not as expected.
  24. I think if we knew that it would be easy enough to either work around it or raise a bug report on Firefox to fix it (assuming one has not already been raised).
  25. The only way with that combination of drives to get extra space would be to run with a single 16TB parity drive (and thus only be protected against a single drive failure) and have 3 data drives. Not necessarily a bad idea with that few drives, as the chance of two simultaneous failures is low, but it depends on your tolerance for risk. As was mentioned, no parity drive can be smaller than the largest data drive.
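
For post 3 above, here is a minimal sketch of one way to find files that exist both on the cache pool and on an array disk, so you can decide which copy to keep. It assumes the standard Unraid mount points (/mnt/cache and /mnt/disk*); the share name "Media" is only an example and not from the original post.

#!/usr/bin/env python3
"""List files that exist both on the cache pool and on an array disk."""
from pathlib import Path

SHARE = "Media"  # example share name - change to the share you are checking

cache_root = Path("/mnt/cache") / SHARE

for cache_file in cache_root.rglob("*"):
    if not cache_file.is_file():
        continue
    rel = cache_file.relative_to(cache_root)
    # Check every array disk for a file at the same relative path
    for disk_root in sorted(Path("/mnt").glob("disk*")):
        array_file = disk_root / SHARE / rel
        if array_file.is_file():
            print(f"DUPLICATE: {cache_file}")
            print(f"      and: {array_file}"
                  f"  (sizes {cache_file.stat().st_size} / {array_file.stat().st_size})")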
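
For post 8 above: Unraid plugins are written in PHP and Limetech's document describes the actual mechanism, so the sketch below is only a language-agnostic illustration (in Python) of the idea that the English text itself acts as the key into a per-language file, with missing entries falling back to English. The file name and helper names are invented for illustration.

#!/usr/bin/env python3
"""Illustration of the 'English text as the key' translation pattern."""
import json
from pathlib import Path

def load_translations(lang: str) -> dict:
    """Load a per-language key/value file; a missing file means English only."""
    path = Path(f"translations.{lang}.json")  # hypothetical file name
    if path.is_file():
        return json.loads(path.read_text(encoding="utf-8"))
    return {}

def tr(text: str, translations: dict) -> str:
    """Return the translated string, falling back to the English key."""
    return translations.get(text, text)

translations = load_translations("de")
print(tr("Parity check running", translations))  # falls back to English if no entry exists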
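
For post 13 above, this sketch shows one way to see which processes are still holding files open under the cache copies of appdata/system/isos (which is what stops mover from transferring them). It simply walks the standard Linux /proc/<pid>/fd entries; the paths are the usual Unraid defaults and may need adjusting, and it needs to run as root so the fd symlinks are readable.

#!/usr/bin/env python3
"""Report processes holding files open under the given cache paths."""
import os
from pathlib import Path

WATCH_PATHS = ("/mnt/cache/appdata", "/mnt/cache/system", "/mnt/cache/isos")

for proc in Path("/proc").iterdir():
    if not proc.name.isdigit():
        continue
    try:
        comm = (proc / "comm").read_text().strip()
        fds = list((proc / "fd").iterdir())
    except OSError:
        continue  # process went away or we lack permission
    for fd in fds:
        try:
            target = os.readlink(fd)
        except OSError:
            continue
        if target.startswith(WATCH_PATHS):
            print(f"pid {proc.name} ({comm}) has open: {target}")
            break  # one hit per process is enough to flag it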
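
For post 14 above, a minimal sketch of starting the extended SMART test from the command line via smartmontools (which ships with Unraid). Replace /dev/sdX with the real device; the test runs inside the drive's firmware, so check the result later with smartctl -a or from the GUI.

#!/usr/bin/env python3
"""Start an extended SMART self-test and print the overall health assessment."""
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # pass the real device

# Start the extended (long) self-test; smartctl returns immediately while the drive works
subprocess.run(["smartctl", "-t", "long", device], check=True)

# Print the overall SMART health assessment (exit status is a bit-mask, so no check=True)
subprocess.run(["smartctl", "-H", device])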
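
For post 20 above, a small sketch of coping with the 6.12 device-name change by simply using whichever node exists. The helper name resolve_md_device is invented for illustration.

#!/usr/bin/env python3
"""Return the md device node for an array slot on either side of the 6.12 naming change."""
from pathlib import Path

def resolve_md_device(slot: int) -> str:
    """Return whichever device node exists for this array slot."""
    for candidate in (f"/dev/md{slot}p1", f"/dev/md{slot}"):  # try the 6.12+ form first
        if Path(candidate).exists():
            return candidate
    raise FileNotFoundError(f"no md device found for slot {slot}")

print(resolve_md_device(1))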
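
For post 21 above, the interaction of the conditions is easier to follow written as logic, so here is a sketch that restates the described rules. It is not the Parity Check Tuning plugin's actual code, all names are invented, and the caveat is simplified; it only encodes the behaviour as described in that post.

#!/usr/bin/env python3
"""Sketch of the increment pause/resume rules described in post 21."""
from dataclasses import dataclass

@dataclass
class State:
    manual_check: bool               # the check was started manually
    manage_manual_checks: bool       # plugin option to also manage manual checks
    at_increment_start: bool         # we are at the increment start time
    at_increment_end: bool           # we are at the increment end time
    check_running: bool              # check currently running (not paused)
    paused_for_condition: bool       # paused because of mover/CA Backup/overheating
    condition_ended_after_end: bool  # that condition only cleared after the increment ended

def plugin_action(s: State) -> str:
    """Return the action the plugin would take, per the described rules."""
    if s.manual_check and not s.manage_manual_checks:
        return "none"  # manual checks are left alone unless the option is activated
    if s.paused_for_condition and s.condition_ended_after_end:
        return "none"  # caveat (simplified): no resume once the increment window has closed
    if s.at_increment_start and not s.check_running:
        return "resume"
    if s.at_increment_end and s.check_running:
        return "pause"
    return "none"      # outside increment boundaries a manual pause/resume is respected

# Example: scheduled check, currently paused, and the increment window is opening
print(plugin_action(State(False, False, True, False, False, False, False)))  # -> resume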