Everything posted by itimpi

  1. What file system type did you use for the cache? If it was XFS then the answer is yes. However if it was BTRFS then you can add new drives dynamically. You may find this section of the online documentation to be of use.
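     As a sketch (the mount point is an assumption - on Unraid the cache pool normally mounts at /mnt/cache, but the snippet defaults to "/" so it runs anywhere), you can confirm which file system a pool uses, since only a BTRFS pool can be grown in place:

     ```shell
     # Sketch: report whether a mount point's file system allows adding
     # devices to an existing pool. Default path "/" is illustrative only;
     # on Unraid the cache pool is normally at /mnt/cache.
     MOUNTPOINT="${1:-/}"
     FSTYPE=$(findmnt -no FSTYPE "$MOUNTPOINT")
     case "$FSTYPE" in
       btrfs) echo "btrfs: devices can be added to the pool dynamically" ;;
       xfs)   echo "xfs: single device only; the pool cannot be grown in place" ;;
       *)     echo "$FSTYPE: check the online documentation for this file system" ;;
     esac
     ```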
  2. Has the parity check got beyond the size of the biggest data drive? Did you use the parity swap procedure to upgrade the parity drive? If so, it seems that the space beyond the largest data drive is not always correctly zeroed on the new parity drive and you need to run a correcting check once to fix this - after that future checks should be error free.

     However I do not think that is your problem here. It looks as if there may be some other problem that needs looking at first, as your syslog contains continual messages of this type (sdi appears to be disk5):

     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: attempting task abort!scmd(0x000000001e0cffe4), outstanding for 30528 ms & timeout 30000 ms
     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: [sdi] tag#2226 CDB: opcode=0x88 88 00 00 00 00 00 9c b9 b4 40 00 00 00 08 00 00
     Sep 12 14:46:15 Excelsior kernel: scsi target1:0:2: handle(0x000b), sas_address(0x5000cca23c0d2c39), phy(2)
     Sep 12 14:46:15 Excelsior kernel: scsi target1:0:2: enclosure logical id(0x500605b0098b8100), slot(3)
     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: task abort: SUCCESS scmd(0x000000001e0cffe4)

     These seem to have started on Sep 8 - did you do anything to the system then? Is there any indication in the GUI of possible problems with disk5? I would suggest you:
     • Cancel the current check, as there is no point proceeding if there are underlying hardware issues. Do NOT at this point attempt to run a correcting check, as if you have a hardware issue you are more likely to end up corrupting parity.
     • Carefully check all connections (power and SATA) to disk5. Perhaps when changing drives you slightly disturbed an existing connection or did not quite perfectly seat one.
     • Not sure if at that point you should retry the non-correcting check or do something else such as an extended test on disk5.

     You may want to wait to see if anyone else (in particular @JorgeB) has any other suggestions.
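     To get a feel for how frequent these resets are, a quick grep over the syslog from the diagnostics zip is enough. The snippet below recreates a small sample file (the filename is illustrative) so it can be run anywhere:

     ```shell
     # Recreate a short excerpt of the syslog (stand-in for the real file
     # inside the diagnostics zip) and count the task abort events.
     cat > syslog.txt <<'EOF'
     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: attempting task abort!scmd(0x000000001e0cffe4), outstanding for 30528 ms & timeout 30000 ms
     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: [sdi] tag#2226 CDB: opcode=0x88 88 00 00 00 00 00 9c b9 b4 40 00 00 00 08 00 00
     Sep 12 14:46:15 Excelsior kernel: sd 1:0:2:0: task abort: SUCCESS scmd(0x000000001e0cffe4)
     EOF
     grep -c 'task abort' syslog.txt   # prints 2: one attempt line, one SUCCESS line
     ```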
  3. What do you have set as the default file system? It will automatically only be encrypted if that is set to one of the encrypted variants, or you explicitly set it to be encrypted by changing the file system by clicking on the drive (with the array stopped) before formatting it. You may find this section of the online documentation useful if you want to change the file system type (but note that it will wipe any existing content on the drive).
  4. That would only happen if there was some sort of underlying problem affecting both pools and array as in normal operation the pools are independent of the array.
  5. Yes, although the options are more restrictive. The docs have not been updated fully to take into account that ZFS is now available as a file system option in the Unraid 6.12.x releases.
  6. I guess as soon as the location is changed ideally the delete checkbox should be disabled?
  7. You don’t by any chance have a VM set to auto-start and use the GPU? Just asking as that would make it unavailable while the VM is running.
  8. To rebuild the contents of disk2 back onto the same drive you should be using the process documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. If you are not sure what to do, ask again here but with more detail on what you have tried so far.
  9. Ok - that explains that. Hopefully turning on mover logging will tell you what is going on. However do not leave that turned on for normal running as it could end up flooding the syslog.
  10. It depends on whether the check that is running is correcting or non-correcting. Since you have the Parity Check Tuning plugin installed the entries in the Parity History should tell you what type of check was run.
  11. Since you said you had done a lot of conversion recently, this warning from Fix Common Problems is likely to be relevant:

      Sep 14 10:29:00 DarkTower root: Fix Common Problems: Warning: Share Tdarr set to not use the cache, but files / folders exist on the cache drive

      Having said that, I am not sure why it is being generated, as the diagnostics suggest that share is set correctly.
  12. Have you tried clearing your browsers cache to see if that helps?
  13. It might be worth enabling mover logging to get a better idea of what is happening. The diagnostics only show a single invocation of mover just a few seconds before the diagnostics were taken. You should also set a Minimum Free Space setting for at least the cache pool and also for individual user shares where appropriate.
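      As a rough sketch of what the Minimum Free Space setting guards against (Unraid performs this kind of check internally when deciding where to place new files; the pool path and the 50 GiB threshold here are assumptions, and the snippet defaults to "/" so it runs anywhere):

      ```shell
      # Compare available space on a pool against a Minimum Free threshold.
      # On Unraid the cache pool would be /mnt/cache; "/" is illustrative.
      POOL="${1:-/}"
      MIN_FREE_KB=$((50 * 1024 * 1024))                      # 50 GiB in KiB
      AVAIL_KB=$(df -k --output=avail "$POOL" | tail -1 | tr -d ' ')
      if [ "$AVAIL_KB" -lt "$MIN_FREE_KB" ]; then
        echo "below Minimum Free: new writes should overflow to the array"
      else
        echo "pool still has headroom"
      fi
      ```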
  14. Do NOT format the drive unless you want to lose all its data. The correct procedure to follow is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  15. If you use btrfs or ZFS file systems then the corruption should be detected before you copy anything to backups as they have built in checksumming.
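      As an illustration of the idea (sha256sum here stands in for the per-block checksums that btrfs and ZFS maintain internally and verify automatically on reads and scrubs):

      ```shell
      # Record a checksum when data is written, then detect a silent change.
      printf 'important data\n' > file.bin
      sha256sum file.bin > file.sha256          # checksum taken at write time
      printf 'impartant data\n' > file.bin      # simulate bit-rot
      sha256sum -c file.sha256 || echo "corruption caught before it reached a backup"
      ```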
  16. At one point btrfs drives did not work correctly. If you look through the change history, then you will see that adding support for btrfs drives was one of the improvements and that is what that text refers to rather than meaning only btrfs drives.
  17. What makes you think that? It works fine with xfs formatted drives.
  18. Do you have the appdata backup plugin installed to make periodic backups of your appdata folder? If so then if you have a recent backup of this you can get your containers back to the running state as of the backup using Apps->Previous apps to reinstall containers with previous settings.
  19. Quite a few browsers have introduced the idea of sleeping idle tabs, so I think if this option is active it can cause problems when tabs are left open on the Unraid GUI.
  20. Yes. If necessary, a Linux ‘live’ distribution can be temporarily used by booting it off a flash drive without needing to do a full install. There are also tools for Windows and Mac systems for reading Linux file systems, even though those systems do not support them natively.
  21. Unraid uses standard Linux file systems, so the drives can always be read there outside Unraid if needed. Each array drive is a self-contained file system and can be read by itself.
  22. Unraid does not put (or require) anything into the appdata directory. The fact you have it means it must have been created by something you installed.
  23. The fact it was keeping disk17 from unmounting suggests that the docker.img file was on disk17? Ideally you want this on a pool for better performance. Note also that that particular problem was fixed in the 6.12.3 release so you may want to update your system.
  24. Most likely something to do with the way you have your shares set up. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.