mrtek007 Posted March 24, 2021
I am currently running Unraid 6.9.1. I removed a VM, but thank goodness I have a backup on another NAS. I have that share mounted to Unraid, but when I try to copy over the VM folder, which is about 80 GB, I get the error message "No space left on device". I have 8 TB in total and have only used 2 TB, so I have enough space. This VM was already on Unraid, so I'm just not sure why I keep getting this error when I copy it back from my backup NAS. It actually crashes my array and I have to reboot the system. Please help. Thanks.
John_M Posted March 24, 2021
Go to Tools -> Diagnostics and post the zip file.
mrtek007 Posted March 24, 2021
Attached is the file. I'm not sure it will show anything, because when the system crashes the logs seem to be cleared and only start again when the system comes back up. olympus-diagnostics-20210324-0621.zip
ChatNoir Posted March 24, 2021
Is it normal that you have 180 shares? It is unusual to see that many, particularly because most of them show: # Share exists on no drives. You might have an app misconfigured that writes at the wrong level and creates folders as shares, or you made a manual copy? On the initial topic, which particular share(s) fail to copy? I don't want to check all 180 shares.
mrtek007 Posted March 24, 2021
The only share I am moving is the one for the VM, which is under domains/Ldownload. I had done this before on the previous version of Unraid, so I am not sure if this is a bug or just something I have misconfigured and had not noticed before. Also, I don't have 180 shares, which is strange.
trurl Posted March 24, 2021
1 hour ago, mrtek007 said: "i don't have 180 shares. which is strange."
Any folder at the top level of cache or the array is a user share. If you specify a path at the top level of cache, the array, or /mnt/user, it automatically becomes a user share even if you don't specifically create it as one in the webUI. User shares have .cfg files, which can be seen in the shares folder of your diagnostics. Those .cfg files are created when a user share is created, but they aren't deleted when the user share no longer exists. This clutters your diagnostics with a lot of share .cfg files that have no files associated with them.
2 hours ago, ChatNoir said: "I don't want to check all of the 180 shares."
No point in checking any of them in those diagnostics, because the array isn't started, so no shares exist and we can't even see whether the disks mount. @mrtek007 Start the array and post new diagnostics.
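trurl's point about stale .cfg files can be checked from a shell. A minimal sketch: on a real Unraid server the configs live in /boot/config/shares and the shares appear under /mnt/user (standard locations); the demo below fakes both in a scratch directory so it can run anywhere, and the share names are made up for illustration.

```shell
cfgdir=$(mktemp -d)    # stands in for /boot/config/shares
sharedir=$(mktemp -d)  # stands in for /mnt/user
touch "$cfgdir/appdata.cfg" "$cfgdir/OldShare.cfg"
mkdir "$sharedir/appdata"  # appdata still exists; OldShare does not
# Print every .cfg whose share no longer exists on any drive.
for cfg in "$cfgdir"/*.cfg; do
    share=$(basename "$cfg" .cfg)
    [ -e "$sharedir/$share" ] || echo "orphaned: $share"
done
```

This only reports the orphans; whether to delete them is a separate decision.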
mrtek007 Posted March 24, 2021
Attached is one with the array started. olympus-diagnostics-20210324-1236.zip
Squid Posted March 24, 2021
Looks like all those .cfg files were created by Ransomware Protection (long deprecated) and can be very safely deleted (Squidbait-*).
mrtek007 Posted March 24, 2021
How would I remove those? Is that what is causing my issue?
Squid Posted March 24, 2021
Explore the flash drive via the network and open the config/shares folder. Delete them like any other file. There's no harm in leaving them, except that it makes reading diagnostics a PITA.
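The cleanup Squid describes can also be done from a shell on the server. A sketch, assuming the flash drive is mounted at the standard /boot location; the demo uses a scratch directory instead so it is safe to run anywhere.

```shell
# On a real server, substitute /boot/config/shares for "$dir".
dir=$(mktemp -d)
touch "$dir/Squidbait-001.cfg" "$dir/Squidbait-002.cfg" "$dir/appdata.cfg"
ls "$dir"/Squidbait-*.cfg    # always review what a wildcard matches first
rm -v "$dir"/Squidbait-*.cfg # delete only the bait configs
ls "$dir"                    # appdata.cfg survives
```

Reviewing the glob with `ls` before passing it to `rm` is cheap insurance against deleting a real share's config.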
mrtek007 Posted March 24, 2021
So I've been playing around and noticed something: rootfs has 16G, but when I do an appdata restore it fills up and gives me that "no space left on device" error. This is how it looks after I rebooted the server again. I am assuming it is the same issue when I copy my VM over to the Unraid server.
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  860M   15G   6% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  204K  128M   1% /var/log
/dev/sda1        30G  651M   29G   3% /boot
overlay          16G  860M   15G   6% /lib/modules
overlay          16G  860M   15G   6% /lib/firmware
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M     0  1.0M   0% /mnt/remotes
mrtek007 Posted March 24, 2021
OK, so I've cleared all those shares. olympus-diagnostics-20210324-1342.zip
Squid Posted March 24, 2021
Appdata restore has to save the log somewhere. If Docker is enabled, it saves it on a drive (inside the docker.img file). If it's not enabled, it has to store it in RAM.
mrtek007 Posted March 24, 2021
Docker is enabled, but it still seems to be using rootfs.
trurl Posted March 24, 2021
Something is screwy with your setup somewhere. According to your diagnostics you don't have a cache, yet appdata has some files on cache. I am guessing you have somehow, somewhere specified an appdata path that isn't actually on a disk or user share, so it ends up in RAM instead.
mrtek007 Posted March 24, 2021
I noticed that too, but I've never had a cache drive in the many years I've been using Unraid. I guess my only option will be to install another drive and assign it as cache? Or is there a way to change the shares that specify "Prefer: Cache" or "Yes: Cache" to say No?
trurl Posted March 24, 2021
You need to figure out where you are specifying a path to your nonexistent cache. Go to Settings -> Docker and disable it. Do the same for Settings -> VM Manager. Reboot, start the array, and post new diagnostics.
mrtek007 Posted March 24, 2021
Here it is. olympus-diagnostics-20210324-1419.zip
trurl Posted March 24, 2021
In those diagnostics, with Docker disabled, your appdata share is only on the array. One of your dockers must be specifying a path on "cache", such as /mnt/cache/appdata, and since you don't have a cache, that path is in RAM.
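One way to find the offending container without clicking through every template: search the saved Docker templates on the flash drive for a /mnt/cache host path. The /boot/config/plugins/dockerMan/templates-user location is the standard Unraid spot for user templates; the demo below fakes two templates (the container names are made up) in a scratch directory so the sketch runs anywhere.

```shell
# On a real server:
#   grep -l '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/*.xml
tpl=$(mktemp -d)  # stands in for the templates-user folder
cat > "$tpl/my-plex.xml" <<'EOF'
<Container><Config Type="Path">/mnt/cache/appdata/plex</Config></Container>
EOF
cat > "$tpl/my-nginx.xml" <<'EOF'
<Container><Config Type="Path">/mnt/user/appdata/nginx</Config></Container>
EOF
# -l prints only the names of files that contain a match.
grep -l '/mnt/cache' "$tpl"/*.xml
```

Each filename that prints is a container template mapping a path on the nonexistent cache; changing those paths to /mnt/user/... fixes the template side of the problem.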
mrtek007 Posted March 27, 2021
So I found that one of my dockers was pointing to /mnt/cache/appdata. I modified that, but it still gave me issues. I ended up installing a 500 GB SSD and assigning it as cache, and everything seems to be working fine now. My question is: what if I didn't have a cache drive? I get that it uses memory for this, but shouldn't the software prevent memory from being completely taken over and bringing down the whole array? I really didn't have this issue until recently, after I updated to 6.9. Could it be a bug?
trurl Posted March 27, 2021
1 hour ago, mrtek007 said: "So I found that 1 of my dockers was pointing to /mnt/cache/appdata. I modified that but it still gave me issues."
Once the docker had created that path, you would have to manually remove it or reboot to get it to go away.
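The stray folder trurl mentions can be cleared by hand once Docker is stopped. This is only a sketch of the idea: in this scenario /mnt/cache is just a directory sitting in RAM on rootfs, not a mounted cache pool, and removing it is only safe when no cache device is assigned. The demo reproduces the situation in a scratch directory rather than touching /mnt.

```shell
mnt=$(mktemp -d)               # stands in for /mnt on the live server
mkdir -p "$mnt/cache/appdata"  # folder a misconfigured container left in RAM
# With Docker stopped and NO cache pool assigned, remove the stray path.
# (A reboot clears rootfs too, which is why trurl's reboot also works.)
rm -r "$mnt/cache"
ls "$mnt"                      # the stray path is gone
```

On a real server, double-check `df -h /mnt/cache` first: if it shows rootfs rather than a separate device, the path is in RAM.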
Squid Posted March 27, 2021
10 hours ago, mrtek007 said: "So I found that 1 of my dockers was pointing to /mnt/cache/appdata."
Which one, and did you install it via Apps?
奇幻树海 Posted March 14, 2022
I hit this problem after removing some disks that were included in a share. I resolved it by re-editing the share's included disks: apply the change, then revert it.