Everything posted by itimpi

  1. You must still have the standard parity check scheduled for the times when you want parity checks to start, but you must set the Cumulative Parity Check setting to No if you are using the Parity Check Tuning plugin to handle the pausing/resuming of increments. I'll reword that message to make it slightly clearer.
  2. You have:
         system   shareUseCache="yes"   # Share exists on disk1
     which means that if the docker service is even active this will keep disk1 and parity spun up even when no containers are running. You also have:
         appdata  shareUseCache="no"    # Share exists on disk2, cachepool
     which is a bit strange. Normally you want this share completely on the cache (Use cache set to Only or Prefer) to avoid spinning up disk2 and parity any time a docker container is running that has files on disk2. You may find this section of the online documentation, accessible via the Manual link at the bottom of the Unraid GUI, useful. In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS -> Manual section covers most aspects of the current Unraid release.
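     For illustration, here is a minimal sketch of what the per-share settings could look like once both shares are moved to the cache pool. This assumes the usual location of the share config files on the flash drive (config/shares/) and the current key names and values - check an existing .cfg on your own flash to confirm, and note that normally you would change this from the Shares tab in the GUI rather than editing the files directly:
         # config/shares/system.cfg   (sketch - key names/values assumed)
         shareUseCache="prefer"    # keep docker/libvirt images on the pool so disk1 and parity can spin down
         shareCachePool="cache"    # name of the pool this share should use
         # config/shares/appdata.cfg (sketch - key names/values assumed)
         shareUseCache="prefer"    # container appdata lives on the pool; mover will pull any stray files off disk2
         shareCachePool="cache"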
  3. This is expected - mover will never overwrite an existing file. You need to decide which copy you wish to keep and delete the other one. Dynamix File Manager is probably the easiest way to do this.
  4. Have you ever run a correcting parity check? If not then you will keep getting the same errors until you do.
  5. I see this in the syslog:
         Mar 13 03:40:01 PolaFlix kernel: XFS (md1p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x200000b7 dinode
         Mar 13 03:40:01 PolaFlix kernel: XFS (md1p1): Unmount and run xfs_repair
     You should run a check filesystem and repair on disk1 (via the GUI) to fix this. Less importantly, I also see this:
         Mar 12 14:27:00 PolaFlix root: Fix Common Problems: Warning: Share La Biblioteca de Pola set to not use the cache, but files / folders exist on the cache drive
     which you probably also want to fix.
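     For reference, the GUI check filesystem option on the disk1 settings page runs xfs_repair for you. If you prefer the command line, a hedged sketch of the equivalent commands (device name taken from the log lines above; the array must be started in Maintenance mode so the filesystem is not mounted):
         xfs_repair -n /dev/md1p1    # -n = no-modify mode: report problems but change nothing
         xfs_repair /dev/md1p1       # actual repair; if it complains about a dirty log you may need -L to zero it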
  6. You should be disabling the docker and VM SERVICES - not just stopping the running instances.
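     Assuming the current menu names, that is done from the GUI settings pages rather than from the Docker/VMs tabs, along these lines:
         Settings -> Docker      -> Enable Docker: No
         Settings -> VM Manager  -> Enable VMs:    No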
  7. Not able to help with that, I am afraid. It seems to happen reasonably frequently for some people, while others see their flash drives lasting forever. I suspect it is something to do with the USB ports on the motherboard in some way. As always, we find that USB2 drives and/or USB2 ports seem to be more reliable.
  8. In effect yes, although I would reformat it first if you try this. Note that this would mean you would need to run make_bootable.bat (as administrator) afterwards if you want to boot in legacy mode.
  9. Have you tried plugging the flash drive into a PC/Mac to see if it can still be read? If so, then rewriting it sometimes helps.
  10. I wonder whether manually editing the config/rsyslog.cfg file on the flash drive so that the local folder is a path something like /mnt/disks/SyslogServer (i.e. a device handled by the Unassigned Devices plugin) would work - I think I will try testing it out. Of course, even if it does work it will not be officially supported at the moment, and it would mean you cannot view/alter the setting via the GUI, but it might be a viable short-term solution.
  11. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
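     If you are not sure how to produce the zip, it can be generated from Tools -> Diagnostics in the GUI, or from a console/SSH session with:
         diagnostics    # writes <servername>-diagnostics-<date>.zip to the logs folder on the flash drive (location assumed from current releases)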
  12. Make sure that you do not have things set up so that you are falling foul of the behaviour mentioned in the Caution in this section of the online documentation accessible via the Manual link at the bottom of the Unraid GUI. If you think it is not that then you should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
  13. Since you have not mapped it to a location outside the container, this path is only internal to the container.
  14. Who is this 'you folks'? I am just an Unraid user. You could set it to write to a share that is on a pool instead. Might not catch the initial start-up sequence but should still help with crashes. I notice that the drop-down for locations has a 'Custom' option but unfortunately it does not look like that is selectable. Seems a bit pointless having it if it is not.
  15. OK, at that point parity WAS valid I think (unless I have introduced a bug into the plugin so that it is not recognising read-checks), and that is what the message you showed is saying; parity has been disabled since then. Why is not clear. Since you reset the system, we have no log from when the drive was disabled unless you happen to have the syslog server enabled with the mirror-to-flash option set.
  16. So what does the history say about the last check?
  17. The SMART checks happen BECAUSE a drive has just been spun up, so this is a symptom, not a cause. You need to identify the cause by a process of elimination: disable everything that could access the disks (Docker, VMs, LAN clients) and then bring them back one at a time.
  18. Probably the most important setting to get correct is the list of included folders/shares - you normally want to include only the folders/shares you really want scanned - but you do not mention that setting.
  19. Since parity is disabled, the last check would have been a read-check only, so I agree it should not say parity is valid. You might want to consider installing the Parity Check Tuning plugin. It has been designed to make parity checks less intrusive when you have large disks to check and checks can therefore take a long time. Even if you do not use its other features, it will also enhance the Parity History entries to give more information, such as why the check was run and what type of check it was.
  20. Do you get CRC errors if you set the SSDs to never spin down?
  21. There can be. Ideally, to get the best performance, you want anything that might otherwise keep array disks spun up to be on an SSD pool external to the main Unraid array.
  22. The server restarting by itself is almost invariably hardware related. The commonest causes are a struggling PSU or an overheating CPU.