Everything posted by trurl

  1. No. It is your docker image that is 10G (I usually recommend 20G but no more). And the usual location for the docker image is in the system share instead of the appdata share where you have it. Appdata is where each docker normally keeps its application data. Typically there will be a separate folder within appdata for each docker's working storage. This is separate from the docker image, which only contains the executables for the dockers. How much space these appdata folders take up will vary depending on the dockers you use. Since you have such a small cache, I recommend not caching any of your other user shares and just saving all that space for your dockers to use.
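     If you want to see which docker's appdata folder is actually using the space, something like this from the command line gives a quick breakdown (a minimal sketch, assuming your appdata share lives on the cache pool at /mnt/cache/appdata; adjust the path if yours differs):

     du -sh /mnt/cache/appdata/*    # size of each docker's working folder
     df -h /var/lib/docker          # how full the docker image itself is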
  2. I always recommend putting any new information in a new post. That is why I always say: Tools - Diagnostics, attach complete diagnostics zip file to your NEXT post. New information is easier to find in a new post, and the forum will show us there are new posts to be read in the thread. If you just attach it to a previous post, the thread won't show anything unread, so it might get skipped over when we are looking for new things to read and help with.
  3. Should work, assuming you haven't left anything important out of your narrative. It's possible it won't work perfectly for some reason, but we can deal with that, possibly some corruption to be repaired, after the rebuild.
  4. Those diagnostics seem to indicate that the only share with data currently on cache is appdata. You don't have any dockers running, but I see you have the docker image configured to live in that appdata share. Not the "standard" method, but it shouldn't matter. Cache does have more than that used, though, so perhaps you also have appdata folders for dockers that aren't currently running. What dockers have you had? Cache isn't all that full now, so I'm not sure what makes you think there is a problem. I notice one share anonymized as A--s that has inconsistent Include/Exclude settings. You should only set Include or only set Exclude, not both. Include means ONLY those disks, Exclude means EXCEPT those disks. There is never any reason to set both, and you have them set in conflict with each other.
  5. Just thought I would come back to OP for a moment: The fact that it didn't remember your disk assignments suggests that flash was corrupt. In particular, it couldn't read config/super.dat on the flash drive, which contains your disk assignments. I'm not entirely sure I would trust the contents of flash at this point. You might put it in your PC and let checkdisk examine it. While it is there, make a backup of flash. You should always have a backup of flash, stored somewhere you can get to it, so you can recreate flash in order to boot Unraid. If you have a good backup of flash you can always restore the config folder from that backup onto a new install and everything will be back just as it was. For future reference, you can always download a zipped copy of flash from Main - Boot Device - Flash - Flash Backup. If it was simply a case of disk1 missing and no New Config, then it would have been possible to manually copy the data from the emulated (missing) disk1 to disks 2 and 3. But since you had lost your drive assignments, you are working from a New Config. If you can't assign the old disk1 to slot 1 for some reason, possibly an actual issue with the drive itself, then you need another disk to assign to slot 1. Then the invalidslot command could be used with New Config to make Unraid rebuild disk1 from parity plus all other disks using the parity calculation, instead of rebuilding parity from all the other disks, which is what New Config does by default.
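     For reference, the invalidslot procedure goes roughly like this (a sketch only, not something to run blind: the slot numbers depend on your configuration, 29 here assumes single parity in the parity slot, and you should confirm the exact steps against a current forum walkthrough before starting anything):

     # after New Config with all disks assigned, with "Parity is already valid" checked,
     # and BEFORE starting the array:
     mdcmd set invalidslot 1 29    # mark data slot 1 invalid so it gets rebuilt instead of parity
     # then start the array and the rebuild of disk1 begins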
  6. Are you sure you have reliable and adequate power? Adequate cooling? See this FAQ for setting up Syslog Server so you can get a syslog saved from before the crash to post: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601 Not related to your problem, but why have you allocated 100G to docker image? Have you had problems filling it? 20G should be plenty, and making it larger won't fix anything; it will just make it take longer to fill. Also, appdata has files on the array.
  7. As mentioned, the sd designations can and will change. They will especially change when disks are added, replaced, or removed. They can even change from one boot to the next for no apparent reason. So they are not useful for understanding how disks were previously assigned. If the disk was missing or disabled, its contents should have been available anyway from the parity calculation. This is known as "emulation". It is even possible to write to an emulated disk; parity is updated as if the disk had actually been written, so the written data can be recovered when the disk is rebuilt. It will, but it requires parity plus all of the other disks to calculate the contents of the missing disk. The parity disk by itself cannot recover anything. There is a command line method to tell Unraid to rebuild a disk other than the parity disk during New Config. You need a disk to rebuild to.
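     If you want to see how the changeable sdX letters map to the actual drives at the moment, the serial-based identifiers are the stable ones. A quick check from the console (just standard Linux, nothing Unraid-specific):

     ls -l /dev/disk/by-id/ | grep -v part    # each drive's model/serial -> its current sdX letter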
  8. Typically the forum is a lot better than what this thread devolved into. Please consider reporting a post to the moderators instead of responding in kind. Also note that almost everyone on the forum, including the moderators, is just a user like yourself. Let's all try to help each other.
  9. I don't have a complete backup, but I do have 2 copies of anything I consider important and irreplaceable offsite. You don't have to backup everything. You get to decide what qualifies as important and irreplaceable.
  10. One thing I would add to this for when you are planning to reuse disks that have data on them: you must always have another copy of anything important and irreplaceable. Parity is no substitute for a backup plan.
  11. I'm currently running linuxserver.io plex and my sister halfway across the country can access my plex just fine. That said, I know there are plenty of people using those others without that problem so I can only guess it is something wrong with your setup.
  12. I am not aware of any changes that might have impacted this. My best guess is what I already posted at the end of the previous page.
  13. There is a post about this pinned near the top of the General Support subforum, but here are the basic facts of the matter; there are a lot of ways to get there. In order to change a disk to a different filesystem, you have to format it. So, if the disk has any data on it you want to keep, you have to move or copy its data elsewhere before the format. I will save the details of exactly how to tell Unraid to format a disk for after you get a disk ready to format.
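     As a rough illustration of the "move or copy its data elsewhere" step (just a sketch, assuming disk1 is the one being converted and disk2 has enough free space; many people use the unBALANCE plugin instead, and you should verify the copy before formatting anything):

     rsync -avPX /mnt/disk1/ /mnt/disk2/    # copy everything from disk1 to disk2, preserving attributes and showing progress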
  14. I notice you are correcting a lot of parity errors. Is that expected? Have you done memtest?
  15. I don't see any error message mentioned in your post. If you are referring to the login prompt you get from an attached monitor and keyboard, that is completely normal. The login username is root and the password is blank (just hit return for the password) or whatever password you had previously set for the root user. If that isn't what you mean then try again to explain your problem. If you do get logged in, see this "Need Help?" sticky pinned near the top of this same subforum for instructions on how to get us your diagnostics: https://forums.unraid.net/topic/37579-need-help-read-me-first/
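     If the webUI isn't reachable but you can log in at that console, you can also just type the diagnostics command there; as far as I recall it saves the zip to the logs folder on the flash drive, so you can grab it from another machine:

     diagnostics    # writes a dated diagnostics zip to /boot/logs on flash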
  16. And later on it looks like he formatted a rebuilding disk8. On second thought, I think this just indicates partitioning the replacement disk:
     Dec 24 11:29:25 NAS2 kernel: md: import disk8: (sdk) ST8000VN004-2M2101_WKD034DV size: 7814026532
     Dec 24 11:29:25 NAS2 kernel: md: import_slot: 8 replaced
     ...
     Dec 24 11:29:48 NAS2 emhttpd: req (22): startState=RECON_DISK&file=&csrf_token=****************&cmdStart=Start
     ...
     Dec 24 11:29:50 NAS2 kernel: mdcmd (45): start RECON_DISK
     ...
     Dec 24 11:29:50 NAS2 emhttpd: writing GPT on disk (sdk), with partition 1 byte offset 32K, erased: 0
     Dec 24 11:29:50 NAS2 emhttpd: shcmd (26404): sgdisk -Z /dev/sdk
     Dec 24 11:29:51 NAS2 root: Creating new GPT entries in memory.
     Dec 24 11:29:51 NAS2 root: GPT data structures destroyed! You may now partition the disk using fdisk or
     Dec 24 11:29:51 NAS2 root: other utilities.
     Dec 24 11:29:51 NAS2 emhttpd: shcmd (26405): sgdisk -o -a 8 -n 1:32K:0 /dev/sdk
     Dec 24 11:29:52 NAS2 root: Creating new GPT entries in memory.
     Dec 24 11:29:52 NAS2 root: The operation has completed successfully.
     Dec 24 11:29:52 NAS2 kernel: sdk: sdk1
     ...
     Dec 25 12:59:41 NAS2 kernel: md: sync done. time=91774sec
     Dec 25 12:59:41 NAS2 kernel: md: recovery thread: exit status: 0
     And since dockers, etc. are on the array, this is probably the cause of the problem. In any case, all of this should have been mentioned in the OP.
  17. I also see some FCP warnings, one of them set to ignore. That one shouldn't be ignored, and you need to fix them all.
  18. Also, your system share has files on the array. And your docker image is larger than I usually recommend; possibly that is related to those dockers without templates. Have you had problems with the docker image filling?
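     A quick way to see which array disks are holding pieces of the system share (assuming the share is actually named system):

     ls -la /mnt/disk*/system    # any output here means system share files are on array disks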
  19. Enable Syslog Server as explained in the FAQ here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601 I also notice you have FCP warnings about several containers without templates. What do you plan to do about those?
  20. Looking again I see you don't have any cache, so I guess you can ignore that last paragraph, or not. You might take it into consideration.
  21. Why have you allocated 50G to docker image? Have you had problems with it filling? Making it larger won't fix anything; it will just make it take longer to fill. I always recommend 20G, and it is unlikely you would need even that much unless you have one or more of your docker applications misconfigured. The typical way you get a docker image filling up, or its usage growing, is by having some path in an application that doesn't correspond to a mapping. Common mistakes are not using the same upper/lower case as mapped, or not using an absolute path. Possibly unrelated, but other things I see with your configuration that are not ideal: most of your disks are very full, and some are still ReiserFS. Your appdata, domains, and system shares are on the array instead of cache. Those shares, and their dockers and VMs, will perform better on cache since they won't be impacted by parity, and having those shares on the array will keep array disks spinning. So the preferred location for those shares is all on cache, set to stay on cache.
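     As an illustration of the kind of path mismatch I mean (the paths here are made up for the example, not taken from your system):

     # container template maps:   /mnt/user/media  ->  /media   (container side, lower case)
     # app inside set to save to: /Media         <- case mismatch, writes land inside the docker image
     # app inside set to save to: downloads/     <- relative path, also lands inside the docker image
     # app inside set to save to: /media         <- matches the mapping, writes land on the array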
  22. Or maybe just post your diagnostics now. It might show something about how you have things configured that would give a clue to how it is breaking.
  23. Included in Nerd Pack plugin: https://forums.unraid.net/topic/35866-unraid-6-nerdpack-cli-tools-iftop-iotop-screen-kbd-etc/