trurl

Everything posted by trurl

  1. Mover ignores cache-no user shares. You must set the share to cache-yes to get its files moved to the array.
  2. Looks like you filled the cache and then the log filled up with move errors. You have a user share anonymized as n-------d set to cache-prefer. Prefer means try to keep all of that share on cache, and possibly that is the reason you filled the cache. What is the purpose of this user share? Normally only appdata, domains, and system shares would be set to cache-prefer. Your appdata has some files on disk2, maybe because there wasn't room on cache.
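     If you want to double-check which shares are set to prefer, one way (assuming the usual Unraid location for share settings on the flash drive) is to look at the share config files from the command line:

         grep shareUseCache /boot/config/shares/*.cfg

     Any share showing shareUseCache="prefer" will be treated like appdata, domains, and system usually are, with mover trying to keep it on cache.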
  3. The diagnostics you posted only had 2MB for this. Did you edit your diagnostics?
  4. Log files aren't all that large. What do you get from the command line with this?

         du -h /var/log
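     If that shows something large, this (plain GNU du and sort, nothing Unraid-specific) will list the biggest items last so you can see which log is responsible:

         du -ah /var/log | sort -h | tail -20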
  5. Whatever is being emulated is what is being rebuilt. On Main, does it say the emulated disk is unmountable? Diagnostics now might be better than waiting until after the rebuild, especially since you have a new problem.
  6. Already copied into this General Support thread. Please don't start another for this issue.
  7. I have copied this report to General Support. Please go there for further discussion.

     Changed Status to Closed
     Changed Priority to Other
  8. Your use of the term "file system" isn't the usual meaning. I think you mean something like a file explorer. Since Unraid is a NAS, you can use whatever file explorer you use on the client computers (your PCs) to "explore" the shares on Unraid over the network. Normally you don't want anybody except the person managing your server to work directly on the server. Only the root user has access to the Unraid webUI, and that user has unlimited access. Your other users don't need Krusader, though it might be useful to the person managing the server. Your "not computer guys" can just use Windows Explorer to work with network shares on the Unraid server.
  9. That seems to indicate that the version on cache is the currently used one, as it should be. You can delete the system folder on disk1.

     Running mover more frequently often doesn't help anything. It is impossible to move to the slower array as fast as you can write to the faster cache. Mover is really intended for idle time. Your cache is really pretty large. Are you really writing hundreds of gigs every day? You might consider writing some of that directly to the array.

     Why is anything writing to docker.img? Normally you want your docker applications writing to mapped storage. Docker.img is really just for the executable code of your docker containers. The docker applications themselves should not be writing into it. My docker.img is 20G; that is the size I usually recommend. I have 17 dockers running, and they use less than half of that 20G. You have 50G allocated to docker.img. It shouldn't be necessary to have that much. Don't know if you have filled it or not; diagnostics would tell.

     Any application that is writing to a path that doesn't correspond to a container path mapped to a host path is writing into the docker.img. Common mistakes are specifying different upper/lower case than in the mappings, or writing to a relative path (not beginning with /).
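     A made-up example of how that goes wrong (the name someapp and these paths are just for illustration, not from your diagnostics): say a container is started with host /mnt/user/downloads mapped to container /downloads, something like

         docker run -d --name someapp -v /mnt/user/downloads:/downloads someimage

     Then, inside that container:

         writes to /downloads/file.bin    land on the mapped host storage, as intended
         writes to /Downloads/file.bin    go into docker.img (case doesn't match any mapping)
         writes to downloads/file.bin     go into docker.img (relative path, resolved inside the container)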
  10. trurl

    LOG 100% use

    Do you have open ports?
  11. Rebuilding a failed drive uses the parity data with the data from all the other drives to calculate the data to rebuild the failed drive. New Config enables all disks again and rebuilds parity based on all the disks in the array. If a disk is missing, the parity rebuild will not include that disk. And if the disk is not missing but is returning bad data, that bad data will be part of the parity rebuild. So, after New Config rebuilds parity, anything that parity had before that would have allowed a failed or missing disk to be rebuilt is no longer there. Invalidslot makes New Config rebuild a specified data disk instead of rebuilding parity.
  12. Might be more useful to start your own thread with your specific details
  13. Not exactly the same thing you were asking about, but in case you don't know: it is possible to pause and resume a parity check as long as you don't stop the array or reboot. There is also a plugin that lets you schedule this so you can do parity checks in smaller chunks.
  14. Here is how this whole disable and emulation thing works.

      When a write to a disk fails, Unraid disables the disk. If the disk is a data disk, the write is still used to update parity, so that failed write can be recovered when the disabled disk is rebuilt. The disk is disabled because it is no longer in sync with parity.

      After a disk is disabled, the actual disk is not used again until it is rebuilt (or, in your case, a New Config, see below). Instead, the disk is emulated by reading all other disks to get its data. The emulated disk can be read, and it can also be written by updating parity. So writes to the emulated disk continue even when the disk is disabled. Those writes can be recovered by rebuilding the disk from the parity calculation. And rebuilding the disk is the usual way to recover from this, because the disk is no longer in sync with parity, since parity contains writes that happened with the disk disabled.

      It is also possible to enable all disks again by setting a New Config and rebuilding parity, thus getting parity back in sync with all the data disks in the array. But any writes to that disk that happened while the disk was disabled are lost when you take that option.

      In your case, the actually failing disk14 was contributing bad data to the emulation of those disabled disks. That resulted in those emulated disks being unmountable. But the actual disks were still mountable, as we discovered. Technically, parity is out of sync with those disks, but maybe not by much. The rebuild of disk14 is relying on that "not much".

      One final note. If a read from a disk fails, Unraid will try to get its data from the parity calculation by reading all the other disks, and then try to write that data back to the disk. If that write fails, the disk is disabled. So it is possible for a failed read to cause a failed write that disables the disk.
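      A much simplified illustration of the single-parity math (which is just XOR across the data disks at each bit position):

          disk1 = 1, disk2 = 0, disk3 = 1   ->   parity = 1 XOR 0 XOR 1 = 0

      If disk2 is disabled, its bit is emulated from everything else:

          parity XOR disk1 XOR disk3 = 0 XOR 1 XOR 1 = 0

      which is why every other disk, parity included, has to read correctly for the emulated disk to be correct, and why a failing disk14 corrupted the emulation of the other disabled disks.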
  15. A drive failing can certainly cause bad writes to that drive. Writes to one drive are unrelated to writes to other drives, since each disk is an independent filesystem. Except, of course, that parity is always updated when a data drive is written, but even there, parity is only disabled when a write to parity fails. Do you really know it was "within just a few moments"? If you know exactly when these events occurred, that would point to where in your syslog to look for them.
  16. Reviewing your diagnostics, I think I must have been referring to the fact that you have allocated 80G to docker.img and are using 26G of that. My usual recommendation is only 20G allocated for docker.img. Anytime I see someone with more than that, it makes me wonder if they have some application writing to a path that isn't mapped. I have 20G allocated to docker.img, I am running 17 dockers, and they are using less than half of that 20G.

      Have you had problems filling docker.img? Making it larger will not fix anything; it will only make it take longer to fill. The usual reason for using more space than necessary in docker.img is an application writing data into the docker.img. That will happen when it writes to a path that isn't mapped to host storage. Common mistakes are writing to a path that doesn't exactly match the mapped container path with regard to upper/lower case, or writing to a relative path (what is it relative to?).
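      If you want to see what is actually taking the space inside docker.img, a couple of standard Docker commands (nothing Unraid-specific) can help narrow it down:

          docker system df    # totals for images, containers, and volumes
          docker ps --size    # per-container writable-layer size; a large one is usually the app writing to an unmapped path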
  17. These things are unrelated. Unraid disables a disk when a write to it fails. Simple as that.
  18. Some SMART attributes are more serious than others. Which one was it that you acknowledged?
  19. trurl

    LOG 100% use

    Don't know if this is related:

      Aug 26 18:50:02 Acu-Tower vsftpd[25980]: connect from 184.105.139.70 (184.105.139.70)
      Aug 26 19:42:34 Acu-Tower vsftpd[7115]: connect from 170.130.187.58 (170.130.187.58)
      Aug 26 20:00:51 Acu-Tower vsftpd[23429]: connect from 192.241.227.131 (192.241.227.131)
      ...
      Aug 26 22:18:44 Acu-Tower vsftpd[16593]: connect from 104.152.52.34 (104.152.52.34)
      ...
      Aug 27 01:51:23 Acu-Tower vsftpd[16342]: connect from 91.241.19.109 (91.241.19.109)

    https://www.abuseipdb.com/check/184.105.139.70
    https://www.abuseipdb.com/check/170.130.187.58
    https://www.abuseipdb.com/check/192.241.227.131
    https://www.abuseipdb.com/check/104.152.52.34
    https://www.abuseipdb.com/check/91.241.19.109

    And lots more like that.
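    One way to check from the console (standard netstat options, assuming FTP on its default port 21):

      netstat -tnlp | grep :21    # is vsftpd listening, and on which address?
      netstat -tn | grep :21      # any connections established right now?

    Connections from addresses like those mean the FTP port is reachable from the internet, which is why I asked about open ports.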
  20. Normally, when you New Config, parity is rebuilt by default. You don't want to rebuild parity, you want to use existing parity to rebuild disk14 instead. Invalidslot lets you specify a different disk to rebuild during New Config.
  21. yes, according to page 24 of the manual (VT-d) https://download.gigabyte.com/FileList/Manual/mb_manual_b460m-ds3h-ac_e_1003_v2.pdf
  22. Are there any symptoms while it is running? Are you sure there isn't a power issue?
  23. No, invalidslot might be the more direct way to do it all at once, instead of trusting parity first and then replacing. But invalidslot requires overriding the webUI from the command line at one point in the process. Let us know when you get the replacement and we'll see if we can reach consensus.