itimpi
Moderator · 20,700 posts · 56 days won

Everything posted by itimpi

  1. Using the ‘df’ command from the unRaid command line can be useful to see which paths are mount points, what type of file system is mounted at each, and therefore which ones are not actually held in RAM.
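As a quick illustration, ‘df -T’ prints the file system type alongside each mount point (shown here against ‘/’ only so it runs anywhere; on an unRaid system you would look at paths such as /mnt/user or /var/lib/docker):

```shell
# List every mounted file system with its type (second column).
# Anything of type tmpfs or rootfs is held in RAM; xfs, btrfs,
# overlay and loop-backed mounts are on real storage.
df -T

# Query a single path to see which mount it falls under:
df -T /
```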
  2. I think you will find that those particular locations are exceptions to the general rule that everything not under /mnt is in RAM. /var/lib/docker is where the docker image file is mounted, and /lib/modules is where the 'bzmodules' file on the flash drive is mounted as an overlay file system. @epp You are likely to get better informed feedback if you attach your system's diagnostics zip file (obtained via Tools->Diagnostics) to your NEXT post.
  3. I would suggest backing up the current USB drive, and then overwriting it with a fresh copy of the 6.9.0-rc2 release to see if that boots. If one of the .REC files was caused as result of a file not getting written correctly during the upgrade process that might make the current USB drive boot again. You can then copy the ‘config’ folder from your backup onto the USB drive to restore your licence file and current configuration settings.
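A minimal sketch of that backup-and-restore flow, using placeholder directories under /tmp so it can be run safely anywhere (on a real system the flash drive is mounted at /boot and the backup would live on another machine):

```shell
FLASH=/tmp/flash_demo       # stands in for the flash drive (normally /boot)
BACKUP=/tmp/flash_backup    # stands in for your backup location

# Set up a fake flash drive with a config folder to demonstrate with.
mkdir -p "$FLASH/config"
echo "licence-and-settings" > "$FLASH/config/ident.cfg"

# 1. Back up the whole flash drive before touching it.
rm -rf "$BACKUP"
cp -r "$FLASH" "$BACKUP"

# 2. Re-flash the stick with a fresh copy of the release
#    (simulated here by wiping the demo directory).
rm -rf "$FLASH"; mkdir -p "$FLASH"

# 3. Copy only the config folder back to restore licence and settings.
cp -r "$BACKUP/config" "$FLASH/"
cat "$FLASH/config/ident.cfg"   # prints licence-and-settings
```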
  4. You are likely to get better informed feedback if you attach your system's diagnostics zip file (obtained via Tools->Diagnostics) to your NEXT post.
  5. Yes - they serve no useful purpose.
  6. You can use a pool for whatever purpose you like. It is just that unRaid insists in there being at least 1 drive defined as being part of the array even if it is never used for storing any files.
  7. Have you tried putting the USB stick into a Windows/Mac PC to check it? The presence of .REC files indicates there has definitely been corruption at some point.
  8. The procedure is covered here in the online documentation that can be accessed via the Manual link at the bottom of the unRaid GUI.
  9. The syslog clearly shows you are getting write errors. There is nothing obvious in the SMART report for the drive that I can see, but I would suggest clicking on the drive in the unRaid GUI and running an extended SMART test on it.
  10. If there are any parity errors at that point (and it is the most likely time to have such errors) then if a disk fails the rebuild is very likely to result in some level of corruption on the rebuilt disk, as the rebuild process relies on all other disks being read without error and the parity being valid. Putting it off means that many users effectively ignore the fact that parity needs to be corrected. In principle you should run a parity check any time there is any suspicion of it not being valid, as the only acceptable result is 0 errors. Allowing it to be run outside prime time at least means that users are not prone to avoiding doing the check at all. The underlying assumption is that if there has been no reason to suspect a parity problem then there probably is not one, but an unclean shutdown is a reason to suspect there may well be a parity error. It used to be less of a problem when disks were much smaller, as the check did not take as long, so users were less likely to skip doing it.
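To see why stale parity corrupts a rebuild, here is a toy sketch of single-parity XOR reconstruction (small integers standing in for whole disks; this is only an illustration of the principle, not unRaid's actual code):

```shell
# Three data "disks" and their XOR parity.
d1=5; d2=9; d3=12
parity=$(( d1 ^ d2 ^ d3 ))

# If d2 fails, it is rebuilt from parity plus the surviving disks:
echo $(( parity ^ d1 ^ d3 ))   # prints 9, matching d2

# If parity was stale (e.g. a write lost in an unclean shutdown),
# the rebuild silently produces wrong data:
stale=$(( parity ^ 1 ))
echo $(( stale ^ d1 ^ d3 ))    # prints 8, not 9
```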
  11. No - I am afraid I am not prepared to do that. There are reasons for running the check even if you do not agree with them as without valid parity the ability to recover at a later date from a drive failure is compromised. If you can easily cancel such checks without manual intervention I can see a lot of users opening themselves up to data loss at a future date without realizing it. If I do anything in this area the most I will implement is to put the check automatically into a paused state so that it can run in increments outside prime time according to the schedule you have set for increments.
  12. I have seen posts suggesting that the fuse implementation used to support unRaid User Shares can impose throughput limits, so that is likely to be the reason you are capped at the sort of speeds you quote.
  13. No, the automatic parity check after an unclean reboot is something that unRaid does independently of the plugin. You have been able to manually pause/resume/cancel such checks for some time now. The closest you can currently get using the plugin is to make sure that the option to pause/resume unscheduled parity checks is enabled, and then manually pause it on reboot; the plugin will then complete the check in increments (typically outside prime time) according to the schedule you have set for increments. Of course the other option is to cancel the check, but this is definitely not recommended as it introduces the chance of parity getting out of step with the array without you realising it. I have thought of having a plugin setting that would automatically pause such a check without the user having to do the first pause manually after reboot, but have avoided doing it as I would prefer the decision to do so to be an explicit decision by the user, as unclean shutdowns should be an exception and not treated lightly.
  14. Not tried it myself so I do not really know. I have been waiting to see feedback from those who have tried it to get a feel.
  15. You can (optionally) elect with the 6.9.0-rc2 release to not use an image file at all for docker containers and store the files directly on the target drive.
  16. Just pushed a release that should now correctly track whether a parity check is scheduled or manual (or an automatic parity check after an unclean shutdown) and correctly obey the related pause/resume settings. At this point I "think" all outstanding issues have been resolved. If any unexpected behaviour is encountered then please let me know. As always open to suggestions for improvement.
  17. A pool HAS to use BTRFS if it includes (or will in the future include) more than 1 drive. If a pool is always going to be a single drive then XFS is more efficient and more resilient against crashes, but BTRFS has additional functionality.
  18. If you do try and sort out the Lost+Found folder then the Linux ‘file’ command can at least tell you what type of content (and thus the likely file extension) each file contains.
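For example (a hypothetical recovered file under /tmp for illustration; on a real system you would point ‘file’ at the numbered entries in the lost+found directory):

```shell
# Create a sample recovered file with no extension, then identify it.
mkdir -p /tmp/lostfound_demo
printf 'hello world\n' > /tmp/lostfound_demo/000123

# 'file' inspects the content, not the name, so it works on the
# extension-less names that fsck leaves in lost+found.
file /tmp/lostfound_demo/000123   # reports ASCII text, so rename with .txt
```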
  19. Assuming you mean your unRaid GUI password then this might help. If you mean something else then please clarify. unRaid disks should be readable with no issues on a PC running Linux (booting off a Linux ‘Live’ DVD/USB stick is one way to load a temporary Linux system). If using Windows or MacOS then additional software is required to read them.
  20. OK. That leaves the question of whether you are sure they are ‘good’ drives, as adding unreliable drives to an unRaid array is never a good idea. If you are not sure then after copying the data off you could either run something like ‘preclear’ or the manufacturer’s test software against them to check them out before adding them. If you are happy with their status just add them and let unRaid run the automatic ‘clear’ operation, which is also a minimal level of confidence test.
  21. #1 should "just work" if you have no VMs with hardware being passed through to them and none of the disk controllers are RAID ones. #2 should be easy enough - but others may give some advice on the best way to achieve it. #3 Not sure if these drives have data on them, but if so this is basically just a case of copying the files to the array. However you need to provide more information on how they might be connected and what their current format is, as this will affect any advice. Have you run any sort of confidence check on the state of these old drives?
  22. No you do not, but unRaid will not let you assign the new parity disks and then re-assign the old ones as data disks in one step. If you do not use the New Config tool then the process will be:
      • Assign the new parity drive(s) and build new parity. If you want to maintain parity throughout then you need to replace each parity disk in turn and let parity build on that disk before proceeding.
      • After parity is built on the new disks, assign the old ones to the array as new data drives.
      • Wait while unRaid 'clears' them by writing zeroes to every sector.
      • Format the drives so they have an empty file system ready to receive files.
  23. @camjo99 I can confirm that there is a bug in the current version where the plugin gets confused as to whether a check was started manually or as a scheduled check, and ends up treating it as a scheduled check. This typically (as you found) ends up with the check paused in the morning when it should not be. I can see that you encountered this by the fact that the parity.check.tuning.scheduled file was present. I believe that I have now resolved this in the version I have under test. I have been treating it as a low priority fix since the ‘workaround’ is to simply resume the check manually, so the impact on end-users is minimal.
  24. The allocation has nothing to do with SMB/CIFS. If the files are ending up on a single drive then there is something else at play.
  25. Almost certainly! RAM issues are unpredictable as to the symptoms they cause, but file system corruption is not an uncommon one. The only acceptable number of errors when running memtest is 0.