itimpi

Moderators
  • Posts: 20,789
  • Joined
  • Last visited
  • Days Won: 57
Everything posted by itimpi

  1. The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot so we can see what happened prior to the reboot.
  2. That error indicates a corrupt docker image file and is very unlikely to be related to the upgrade. Also, your diagnostics show that you are using macvlan for your docker networking, which is known to cause crashes in the 6.12.x series. To avoid this you should either switch to using ipvlan or follow the directions in the 6.12.4 release notes to continue using macvlan (in particular disabling bridging on eth0).
  3. Then you need to make sure you have followed the directions in the 6.12.4 release notes about disabling bridging on eth0.
  4. I did some testing today on the 6.12.6 release and in all cases the type of check was what had been set under the scheduler settings, so I am not sure why you got something different.
  5. No idea about that script, but if you install the Parity Check Tuning plugin it gives you the ‘parity.check’ CLI command that makes it easy to do what you seem to be looking for.
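     The plugin's own help covers the exact syntax, but as a rough sketch of the sort of invocations it supports (action names assumed; run 'parity.check' with no arguments to list what is actually available):
         parity.check status    # report the state/progress of any running check
         parity.check pause     # pause a running check
         parity.check resume    # resume a paused check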
  6. If you ran the xfs_repair from the terminal there is a significant chance you did not get the device name quite right. Do you know what you used? It is always better, if possible, to run xfs_repair via the GUI as it will automatically use the correct device name, removing this as a potential source of error.
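     For reference, the parity-preserving names for array drives are the md devices; a minimal sketch using disk1 as a hypothetical target (6.12 onwards includes the partition suffix, older releases use /dev/md1), run with the array started in Maintenance mode:
         # dry run first: -n reports problems without writing anything
         xfs_repair -n /dev/md1p1
         # then the actual repair
         xfs_repair /dev/md1p1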
  7. Did you use the New Config tool as was suggested to change the assignments?
  8. You have the Mover Tuning plugin installed and for some reason that seems to stop mover from working for some people. Disable it and see if that helps.
  9. Have you read the online documentation on how the settings for User Shares work? In particular you need to understand the High-Water allocation method and Split Level. You have a very restrictive Split Level on many of your shares that can easily be forcing files to particular drives.
  10. This error must have been there before the upgrade, but you are just now noticing it. There are various other options for similar functionality:
      • The Dynamix File Manager plugin. This is likely to become a standard part of a future Unraid release, although at the moment it is a plugin you have to install.
      • 'mc' (Midnight Commander) from a console session, for a character-based 'pseudo' GUI. This ships as standard with Unraid and has for a long time.
      • Do it yourself from a console session using Linux commands. A lot less user friendly, and error prone if you are not used to the Linux command line.
      At the end of the day you are going to have to manually move files from one folder to another so you can eliminate folders with the same name but different case. The error arises because Linux file/folder names are case sensitive whereas SMB is not.
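     If you want to locate the offending folders from a console session first, something along these lines (GNU sort/uniq, share path hypothetical) lists paths that collide when compared case-insensitively:
         # print all names that are duplicates once case is ignored
         find /mnt/user/MyShare | sort -f | uniq -Di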
  11. Removing the Mover Tuning plugin seems to fix this. Quite why that plugin is causing problems I have no idea.
  12. There have been reports of S3 sleep issues on recent Unraid releases. The symptoms definitely sound as if S3 sleep may be the problem, so try disabling S3 sleep and see if the problem disappears.
  13. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs. It is not at all clear what the current state of your system actually is.
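     For what it is worth, if the GUI is hard to reach the same zip can be generated from a console session; the 'diagnostics' command ships with Unraid and writes the zip to the logs folder on the flash drive:
         diagnostics
         ls /boot/logs/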
  14. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs. At the moment it is not at all clear what the state of your system is. BTW: The xfs_repair command you ran is not the ideal one as at the very least it would invalidate parity. You should always run xfs_repair via the GUI if at all possible as it uses a variant that maintains parity.
  15. Did you get an acknowledgement when you submitted a message to support? Just asking as there should be an automated response when you raise a ticket but it seems some were not getting through. Only support are likely to be able to help you with your issue.
  16. If a drive cannot complete the Extended SMART test it should always be replaced. If it is still under warranty then failing that test is normally enough to get an RMA.
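     If you prefer a console session to the GUI's self-test button, smartctl can start and report the test; a sketch with /dev/sdX as a placeholder for the actual device:
         # start an extended (long) self-test; it runs inside the drive's firmware
         smartctl -t long /dev/sdX
         # when it finishes, review the self-test log and attributes
         smartctl -a /dev/sdX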
  17. The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted. You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash. If using the mirror option the syslog file is stored in the 'logs' folder on the flash drive.
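     With the mirror option set, the surviving copy can then be read straight off the flash drive after a crash (the exact file name can vary by release):
         ls /boot/logs/
         less /boot/logs/syslog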
  18. I am not sure what error you are talking about. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
  19. It looks as if the system incorrectly started a correcting check:
      Dec 4 03:00:01 Tower kernel: mdcmd (36): check
      Dec 4 03:00:01 Tower kernel: md: recovery thread: check P Q
      ... and then later found errors on parity1. I will try running a test to see if I can replicate the incorrect starting of a correcting check on my system in case it is a new bug in the 6.12.6 release. However, why errors were found I have no idea. The fact that docker containers were running should be irrelevant. BTW: With such large parity disks you may find it advantageous to install my Parity Check Tuning plugin. Even if you do not use all its features, the fact that it adds whether a check was correcting or not to the Parity Check history entries may be useful information.
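     If you want to confirm from a terminal how a check was started, grepping the syslog for the mdcmd entries is a quick way; as the extract above suggests, a non-correcting check would normally show a NOCORRECT qualifier on the check command (an assumption based on this log format):
         grep "mdcmd" /var/log/syslog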
  20. It might be worth trying something like ls -d /mnt/*/appdata to see what shows up.
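     By way of illustration, output along these lines (paths hypothetical) would mean an appdata folder exists on both the cache pool and an array disk:
         ls -d /mnt/*/appdata
         /mnt/cache/appdata  /mnt/disk1/appdata  /mnt/user/appdata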
  21. It is not /mnt/user that points to the pool, but folders within /mnt/user that correspond to exclusive shares. This is expected behaviour.
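     You can see this from a console session: for an exclusive share the entry under /mnt/user shows as a symlink straight to the pool (share and pool names hypothetical):
         ls -l /mnt/user/appdata
         lrwxrwxrwx ... /mnt/user/appdata -> /mnt/cache/appdata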
  22. It is basically a trial-and-error approach. You start by renaming .plg files in the config folder on the flash drive to have a different extension and then rebooting, as that stops a particular plugin from loading. Renaming back to .plg and rebooting reactivates that plugin. Whether you try in batches or individually is up to you and how patient you are.
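     From a console session the renaming looks like this (plugin file name hypothetical):
         cd /boot/config/plugins
         # disable: anything not ending in .plg is ignored at boot
         mv somePlugin.plg somePlugin.plg.disabled
         # re-enable later
         mv somePlugin.plg.disabled somePlugin.plg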
  23. You have file system corruption on disk1. You should run a file system check on disk1 to check/repair this corruption.
  24. That message is coming from a plugin since you say this only occurs if you do not boot in Safe Mode. You need to find the culprit and remove it to get rid of the message. Did you check the file system or rebuild? They are completely different operations. Only a rebuild clears the disabled state.
  25. I think that if you are using a ZFS raidz pool then performance is normally good enough that there is no need to cache writes to that pool and thus no need to get mover involved at all.