Everything posted by trurl

  1. I don't use that container. Does it have mappings to any user shares with any contents on the array? Are you sure it is the reason for the spinups? I have NextCloud, my /data is mapped to a cache-yes user share. How are you doing it?
  2. Doesn't look like there were any I/O errors during the rebuild, and I didn't see any in the diagnostics, but there was some clutter in the syslog so I wanted to check. Do you have multiple browsers or browser tabs open to your server, including mobile? That would cause the clutter I was seeing (csrf token messages). So, you need to repair the filesystem on disk1. Stop the array and start it in Maintenance mode, then click on Disk1 to get to its page and click the button to check its filesystem. Capture any output and post it.
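     If you prefer the command line, something like this should do the same thing, assuming disk1 is formatted XFS and the array is started in Maintenance mode (disk1 corresponds to /dev/md1):
        xfs_repair -n /dev/md1    # -n is check-only, reports problems without changing anything
        xfs_repair /dev/md1       # actual repair, only after reviewing the check output
     The webUI button runs the equivalent check for you, and you can paste its output here.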
  3. The bitwarden cfg file is left over from when the share did exist. You can delete that or not; it isn't used since the share doesn't exist. I prefer not to have these cluttering up the diagnostics. I assume the log share you refer to is the one anonymized as l--s that I mentioned. The 'L' cfg file is settings for a share that doesn't exist, as I mentioned. There is no corresponding cfg file for the actual share that begins with 'l', so that share has default settings. Renaming that 'L' cfg file should work, if you rename it in some OS that respects upper/lower case. Do you use Krusader or Midnight Commander on your server? Those would let you do the rename directly on your server while it is still running; then you would have to stop and restart the array to get the shares restarted with those settings. Another possibility is to just delete that 'L' cfg file and make new settings for the 'l' share.
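     If you want to do the rename from the server's console instead, something like this should work, using made-up filenames since yours are anonymized:
        cd /boot/config/shares
        mv Logs.cfg logs.cfg.tmp    # hypothetical names, substitute the real cfg filename
        mv logs.cfg.tmp logs.cfg    # two steps, since a direct case-only rename can be refused on the FAT-formatted flash
     Then stop and restart the array so the share picks up those settings.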
  4. Your diagnostics zip has 2 files in the shares folder anonymized as l--s. Due to DOS/Windows naming conventions, where upper/lower case isn't significant, one of these has (1) appended so they won't have the same filename. One of these has settings for a share that doesn't actually exist. The other is for a share that does exist but doesn't have any settings. This is often caused by a user accidentally creating a share by specifying the wrong upper/lower case in a docker mapping or other path. Linux is case sensitive. Any folder at the top level of cache or array is automatically a user share, even if you didn't create it as a user share in the webUI. I don't know what that share is named since it is anonymized, but you need to clean that up, since the cfg file that exists in config/shares on your flash drive doesn't actually correspond to the share due to the upper/lower case problem, and the actual share itself has no corresponding .cfg file, which means it has default settings. The default Use cache setting is No.
  5. Looks like you have a corrupted filesystem on disk1, and it was that way in both the "before" and "after" diagnostics. Have you rebooted since the rebuild? If not, don't. Post a screenshot of Main - Array Devices.
  6. Parity is in no way a backup. It doesn't even contain any of your files. And it is unlikely to help you in this situation. Haven't looked at diagnostics yet.
  7. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  8. You can go directly to the correct support thread for any of your dockers by simply clicking on its icon in the Unraid webUI and selecting Support. Fix Common Problems plugin would have warned you about the deprecated container, and probably some other things as well.
  9. The only person in the thread who even told us which plex container they are using is using the deprecated and no longer supported limetech plex.
  10. NO!!! Do you have backups of anything important and irreplaceable? Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  11. No. Your ramdisk is not resized on the fly. Memory is used as I/O buffer and released to other processes as needed, but this has nothing to do with your ramdisk allocation.
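     You can see this from the console with something like:
        free -m    # 'buff/cache' is reclaimable I/O buffering, 'available' is what other processes can still get
     A large buff/cache number is normal and separate from your ramdisk allocation.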
  12. A cache-yes share will have most of its contents on the array and only new writes in cache until they get moved to the array.
  13. Not unusual. Syslog is a single file, at least until it rotates. If that single file already exists on a disk then updates to that file will be to the already existing file. As for domains, if it had files on the array, and those files were in use, then mover couldn't move them. On the other hand, if you set that share to cache-only when it already had files on the array, then mover wouldn't try to move them, since it only moves cache-prefer or cache-yes shares. See this FAQ for a more complete understanding of the use cache settings: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/#comment-537383
  14. If the logs were already on the array then it will update those.
  15. Here is another: https://forums.unraid.net/topic/84039-one-drive-failed-while-rebuilding-with-a-new-drive-a-second-drive-failed/?do=findComment&comment=778735 but none of these are exactly where you are now. I recommend patience and seeing what @johnnie.black recommends.
  16. Here is a recent thread with the invalidslot situation: https://forums.unraid.net/topic/77771-multiple-errors/?do=findComment&comment=720061
  17. The CA Backup plugin as mentioned will archive these to the array. Cache-prefer can overflow to the array if cache runs out of space. Best if you just don't allow that to happen though. Might make some sense for domains or appdata, but system really needs to stay on cache and it probably can't be moved anyway. As mentioned, this should stay on cache, and cache-prefer will try to keep it there, but cache-only might be better. It is unlikely mover would be able to move these anyway since this share contains your docker and libvirt images and mover can't move open files. You might do Compute All on the User Shares page to make sure appdata, domains, and system are indeed all on cache where they need to be.
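     Besides Compute All, one quick way to check from the console is to look for those folders on the array disks, for example:
        ls -d /mnt/cache/{appdata,domains,system} /mnt/disk*/{appdata,domains,system} 2>/dev/null
     Anything listed under /mnt/disk* means that share still has files on the array.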
  18. It's already Saturday night where @johnnie.black lives, so we may be waiting a while on him. The reason I think you want to rebuild from parity is that it is likely to have a better result after what happened. Even with the rebuild there may be some filesystem repair to do on the rebuilt disk. Doing the rebuild on a new disk will still allow you to work with the original disk later and independently from that rebuild. The results from a filesystem repair can sometimes be pretty ugly, and that is what I expect if you attempt to repair the original disk after what happened.
  19. This suggests some confusion about how cache works in Unraid. Data is written to cache for cache-yes and cache-prefer shares. Cache-yes shares are moved from cache to array when mover runs, and cache-prefer shares are moved from array to cache when mover runs. Other than that, there is no "pulling" data "that's not in the cache yet". If the data is already on the array, it won't ever go to cache unless it is in a cache-prefer share and mover moves it there.
  20. What we would like to do is not rely on anything already on the disk, but instead rebuild it from the parity calculation using the original parity and all the disks that parity is based on. In fact, as mentioned earlier:
  21. While we wait on a reply from johnnie, answer this question from earlier:
  22. What we had in mind was New Config with the invalidslot command. I don't know how this plays out now with the array already assigned and in maintenance mode. Let's see what @johnnie.black has to say about this.