Everything posted by itimpi

  1. If you have files existing in the wrong location (particularly if they are duplicates) it is up to you to manually sort this out if you want to stop these warnings. Unraid will not do it automatically.
  2. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.
  3. It can also be worth rewriting all the bz* type files in the root of the flash drive, as this quite frequently seems to help even when the flash drive shows no obvious problems.
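     If doing this from the running server itself, a minimal sketch (the zip name and /tmp paths are placeholder examples; the flash drive is mounted at /boot on a running Unraid system) might look like:

       # extract only the bz* files from the release zip for the SAME Unraid version you are running
       unzip /tmp/unraid-release.zip 'bz*' -d /tmp/unraid-extract
       # copy them over the existing copies in the root of the flash drive
       cp /tmp/unraid-extract/bz* /boot/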
  4. This is strongly suggested as plugins are not loaded during Safe Mode. This is the whole point of Safe Mode: plugins can load new code/components into the core Unraid system, which can destabilise the system and create scenarios that will not have been tested by Limetech. If you are using the technique of renaming the .plg files then it takes a reboot to activate the change.
  5. It is only the plugins - Safe Mode does not disable docker. You can disable a plugin by renaming its .plg file under the config/plugins folder on the flash drive and rebooting. Tends to be easier than actually removing them and keeps any settings intact. Do the reverse to reenable them.
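     As a concrete illustration (the plugin name here is purely a placeholder; on a running server the flash drive's config/plugins folder is at /boot/config/plugins):

       # disable a plugin by renaming its .plg file, then reboot
       mv /boot/config/plugins/example.plugin.plg /boot/config/plugins/example.plugin.plg.disabled
       # reverse the rename (and reboot again) to re-enable it
       mv /boot/config/plugins/example.plugin.plg.disabled /boot/config/plugins/example.plugin.plg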
  6. The default for a multi-drive BTRFS pool is RAID1 for redundancy. The actual usable space will be that of the smaller drive as BTRFS reports it incorrectly for drives of different sizes. If you do not want redundancy then you can switch the pool to using the ‘single’ profile and then all 3TB will be available.
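     This can be done from the pool's balance options in the GUI; the command-line equivalent is roughly the following sketch (the /mnt/cache mount point is just an example, and a balance can take a long time on a well-filled pool):

       # convert the pool's data chunks to the 'single' profile (no redundancy)
       # metadata can be left at its existing profile, or converted separately with -mconvert
       btrfs balance start -dconvert=single /mnt/cache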
  7. Unraid currently requires at least one drive in the array. If you do not intend to use it to store anything a workaround is to use an old flash drive.
  8. All the red ‘x’ means is that a write to the drive failed. The vast majority of the time this is caused by an external factor (e.g. cabling, power) and there is nothing wrong with the drive. The best thing to do is run an extended SMART test on the drive, and if it passes that then the drive is almost certainly OK.
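     The test can be started from the drive's page in the GUI, or (as an illustration, with /dev/sdX standing in for the actual device) from the command line:

       # start an extended (long) self-test; it runs in the background on the drive itself
       smartctl -t long /dev/sdX
       # check the outcome later in the self-test log section of the output
       smartctl -a /dev/sdX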
  9. The syslog shows: Feb 1 20:38:20 Tower kernel: protection error, dev sdi, sector 15628053160 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2 when you try to format. I do not have a link to hand but you should be able to find how to remove this by googling something like "Remove type 2 protection" or maybe someone else will chime in with the relevant forum link.
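     For reference, the approach usually mentioned for this uses sg_format from the sg3_utils package. Treat the following as a sketch only: it low-level formats and therefore wipes the drive, /dev/sdi is taken from the log line above, and the format can take many hours.

       # reformat the drive with protection information disabled (fmtpinfo=0 removes Type 2 protection)
       sg_format --format --fmtpinfo=0 /dev/sdi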
  10. Yes. The point is to get the log right up to the moment immediately before the crash. You can get an unclean shutdown even when simply clicking the Reboot button on the Main tab if Unraid was unable to stop the array successfully. It should be irrelevant what happened earlier. Have you confirmed that you can stop the array successfully (within the timeouts set as described here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI) before hitting Reboot? In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  11. Are you sure the disks have failed? Disks being disabled due to factors other than the disk itself failing are very common. Running an extended SMART test on the drives is probably the best health check. BTW: You were using obsolete documentation. The official documentation for parity swap is here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  12. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog.
  13. Have you disabled bridging on eth0 (needed for stability when using macvlan, as mentioned in the Unraid release notes)? From the log you still seem to be using macvlan for docker networking rather than switching to ipvlan.
  14. It might be worth posting the full diagnostics to see if we can spot anything. Ideally this should be after trying a format of one of the drives. As you said these are enterprise drives, they may have 'type 2' protection that needs removing before Unraid can use them.
  15. The key point is that he must not try to do the rsync until Unraid has successfully started up (in particular its User Share system). Perhaps in the past you were just lucky with the timing?
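     One way to enforce that ordering in a script (a rough sketch; the share path and rsync arguments are placeholders) is to wait until the user share mount actually exists before starting the copy:

       # wait for the User Share system to come up before starting the transfer
       while [ ! -d /mnt/user/backups ]; do
         sleep 10
       done
       rsync -av /source/path/ /mnt/user/backups/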
  16. Many people are now creating ZFS or BTRFS 'arrays' (as Unraid pools), which allows you to go well above the 30 drive limit. If I remember correctly you can have up to 60 drives in a pool, and you can have multiple pools. I think it is unlikely that the main Unraid array type will ever go above the 28+2 limit, because that is a lot of drives to only be protected by 2 parity drives, but hopefully in the future you will be able to have more than one of those.
  17. The parity check time is almost completely determined by the size of the parity drive. If it takes 24 hours for 4TB on your setup then expect it to take twice that for an 8TB parity drive. However your speeds seem a bit slow, so maybe your disk controller is limiting your speeds. If your checks are taking that long, do you have the Parity Check Tuning plugin installed, so you can offload the check into increments run in idle time (albeit at the expense of a longer elapsed time)?
  18. It is due to the fact that you had an unclean shutdown while a manual check was running, and the plugin does not clear the state information for the manual check after the reboot. This should auto-fix itself when the automatic check finishes, but if you want to stop it immediately you can delete the parity.check.tuning.manual file in the plugins folder on the flash drive. This issue is fixed for the next time I issue a plugin update, but it did not seem urgent to get an update out so I am sitting on that fix. Unless an urgent issue arises I would like to wait until Unraid 6.12.7 (or even the 6.13 beta) comes out, to check that they are not going to require changes to the plugin.
  19. Do you have the volume mapping for the Krusader docker container set to allow that level of access on the Unraid host? If not then you are looking at the location inside the docker container.
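     On Unraid this is normally set through the container's template (Edit -> path mappings), but conceptually it comes down to a docker volume mapping along these lines (the image name and paths are purely illustrative):

       # map the host's user shares into the container so Krusader can see them
       docker run -d --name=krusader -v /mnt/user:/media your-krusader-image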
  20. I assume you meant 'unmountable'? Handling of unmountable disks is covered here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. That would likely have fixed the issue. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  21. Unexpected reboots are normally hardware related (e.g. PSU or CPU overheating). The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted. You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server's address into the remote server field.
  22. I wonder if there is a problem related to this then? The OP problem could be explained by mover only trying to use the first one listed. I do not currently have a suitable setup to test this.
  23. Yes - but there are multiple mount points for the Data share - is that normal as well? I do not use ZFS in the main array so I have no experience of this, and I am not currently inclined to try, bearing in mind that there is a known problem with performance if you do this.
  24. I did not spot anything obvious either! It might be worth installing the File Activity plugin to see if that gives a clue as to what is accessing the drive.