-Q-


Everything posted by -Q-

  1. Because the metadata of a file is itself data in the same way its contents are, e.g. its name, size, creation date and so on. So if the Recycle Bin is there to protect data from accidental deletion, then it should protect the metadata from deletion as well. Also, sometimes content-type data is stored in the file name rather than within the file, so some operations can be performed with just directory access rather than by opening each file. An example is hMailServer, which downloads each email into a file named <GUID>.eml. Granted, the files in this case should never logically have zero size, but I have seen them with zero size when there's been a disk problem.

     Another good example where zero-length files could be significant is when a program's data or installation folder gets deleted by accident. I might have no idea whether there are zero-length files in there, or whether any of them have meaning - I would just go to the Recycle Bin and recover the whole lot. I would also say it is more intuitive to have all files protected - I assumed they were until I read this post!

     By the way, I'm not criticising Recycle Bin as it stands - it's still a great addition to unRAID - I'm just answering the question by explaining why I would bother to protect empty files.
  2. Like you, I tried the "core" Docker to start with and it worked, but I soon wanted the add-on functionality provided by the supervisor. So I tried the unsupported "hassio_supervisor" Docker, but found it unreliable - it would run for a while, but then stop for no obvious reason. I looked into it for a bit, but soon decided that a supported option was a better way to go, so I set up the VM version, which has been very stable since I first got it running several months ago. I know it's an extra layer of abstraction over a docker container, but HA is a lightweight system anyway, so I don't think much performance is wasted - it's still a lot faster than running it on a Pi from what I can gather.

     You may be aware that the HA supervisor setup uses docker itself, which might have something to do with why it's not happy running within a container. When the docker version of the HA supervisor created its containers I could see and manage them within unRAID, but I don't know enough about Docker to know if this is odd. It's obviously not an issue with a VM. However, if there's an option to add something to HA, like an external database, I check first to see if there's an unRAID version of the docker, to minimise the number of dockers running inside the VM.
  3. Thanks for explaining to trurl and for the fix - that -delete switch is a neat solution since it also enables -depth (strange word to use, I thought), so a directory's contents are processed before the directory itself and therefore all (aged) empty directories are removed in one pass (in a much simpler way than I mentioned above). A sketch of the pattern is below.
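     The plugin's exact command isn't quoted in this thread, so this is just a minimal sketch of the -delete approach, with $BIN and $AGE as placeholder names for the bin path and the retention period:

         # Hypothetical sketch; $BIN and $AGE are placeholders, not the
         # plugin's actual variables, and the path is assumed.
         BIN="/mnt/user/myshare/.Recycle.Bin"   # assumed bin location
         AGE=7                                  # assumed retention in days

         # -delete implies -depth, so contents are processed before their
         # parent directory; -delete also refuses to remove a non-empty
         # directory, so items that haven't expired yet are left alone.
         find "$BIN" -atime +"$AGE" -delete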
  4. Sorry to labour my point then, but there's still quite a bad bug in the plugin. Using rm with only -r will remove any directory passed to it whether it is empty or not. Using rm with only -d will only remove a directory passed to it if it is empty. So using both together means -r will just remove all directories and -d will never get the chance to check whether any are empty or not - it's not as if -d stops -r removing a directory that isn't empty. I confirmed this by entering the following in the unRAID terminal window:

         root@Tower:~# mkdir test
         root@Tower:~# ls > test/a.txt
         root@Tower:~# rm test
         rm: cannot remove 'test': Is a directory
         root@Tower:~# rm -d test
         rm: cannot remove 'test': Directory not empty
         root@Tower:~# rm -rd test
         root@Tower:~#

     See how the final command, which uses -rd, just removes the directory even though it still contains a.txt.

     Aside from how -d and -r work together though, the actual bug is caused by using -r. I guess the key point to realise is that if there is a directory in the Recycle Bin, it can be there for one of two reasons: 1) it's a directory that got deleted; 2) it's a directory that got created in the bin just to hold something else that got deleted, in order to preserve the path to the deleted item(s). The first case implies it can be deleted once it expires, because its contents will age with it, but in the second case it has to stay around until its purpose of holding other files and directories is complete, which happens to be nicely indicated by it becoming empty. So all that's needed is to use -d without -r.

     Note that these 'extra old' directories won't get removed during the same run in which they become empty, due to the order in which the 'find' command returns items, i.e. parent directories before contents. But they will get removed on the run following the one in which they become empty. For a tree 5 levels deep, though, it will take 5 runs to remove (or thereabouts, depending on what you count as the first level). I think it's good enough to just leave this slight quirk in place, but you could pipe the output of 'find' into 'tac' to reverse it and then into 'xargs' to run the 'rm' command, since you wouldn't then be able to use the '-exec' switch of 'find' (see the sketch below). Like I said though, only needed if you're a perfectionist!
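     A sketch of that 'tac' variant, again with $BIN and $AGE as placeholder names, and assuming no newlines in file names:

         # Hypothetical sketch of the reversed-order clean-up. Reversing
         # find's output lists contents before their parent directories,
         # so directories that become empty during the run are removed in
         # the same run. -d only removes empty directories, so a stale
         # directory still holding unexpired files just errors and stays.
         find "$BIN" -atime +"$AGE" | tac | xargs -r -d '\n' rm -fd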
  5. I've just noticed that you've released a new version of the plugin with the addition of the -d switch as I mentioned above, so thanks for your effort here. However, you've left the -r switch in place, which means the addition of the -d switch has no effect. Perhaps you could say whether this was intentional or is perhaps a typo? By the way, my logging problem went away while I was investigating it - to start with there were no samba audit messages being logged, and then they started again, I think after stopping and starting the array. So I'll keep an eye on it and see if anything gives itself away later.
  6. Exactly - so rather than pass the share into the VM via QEMU, just log in to the VM and map the share from there, exactly as if it were a physical machine elsewhere on the network (see the example below). Also, as others have mentioned, I've found that if you create a file and then immediately delete it, the recycle bin doesn't catch it. I'm not sure how long it has to exist first, but I think someone said about 20 minutes earlier in this thread.
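     For example, in a Linux guest the share can be mounted over SMB just as from any other client; 'tower', 'myshare' and 'quser' below are placeholder names, not anything from this thread:

         # Hypothetical example (requires cifs-utils in the guest):
         sudo mkdir -p /mnt/myshare
         sudo mount -t cifs //tower/myshare /mnt/myshare -o username=quser,vers=3.0

         # A Windows guest would map it the usual way instead:
         #   net use Z: \\tower\myshare /persistent:yes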
  7. I think there may be a bug in the remove function (vfs_recycle_remove) in the script file /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin. The 'find' command is used to list all files and directories that have been in the recycle bin longer than the configured number of days (by checking the 'last accessed time', which is set when an item is moved to the recycle bin). Each resulting file and directory is then passed to 'rm -rf' to permanently delete it.

     The trouble is that when a file is deleted into a directory that already exists in the recycle bin from an earlier deletion, the directory does not have its 'last accessed time' updated. This means the 'find' command can list a directory that has been in the recycle bin longer than the configured number of days even though it contains a file that has not been there as long. The 'rm' command with the '-r' switch then removes such a directory, including any contents that have not been in the recycle bin long enough.

     It seems to me that since the 'find' command is already recursing into directories, the 'rm' need not do so as well. However, simply removing the '-r' switch would leave empty directories behind; replacing it with '-d' (so 'rm' removes directories only once they are empty) seems to work - see the sketch below. Perhaps the author could let me know if they agree with my analysis or if I've got something wrong.

     I was prompted to investigate this because files are being removed from my recycle bins before they have been in there long enough. Note that I'm also not currently getting a log of the files that are being deleted, even though I have enabled that option and have seen such logs in the past, possibly with an earlier version of Recycle Bin. The script file doesn't seem to have a version within it, but unRAID shows the plugin as a whole as up-to-date at version 2020.07.08c.
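     To make the analysis concrete, here is a sketch of the pattern described, with $BIN and $AGE as placeholder names rather than quotes from rc.recycle.bin:

         # Problematic form: -r deletes a stale directory's entire
         # contents, including files whose own atime hasn't expired yet.
         find "$BIN" -atime +"$AGE" -exec rm -rf {} +

         # Proposed form: files are removed individually as find lists
         # them, and -d only removes a directory once it is empty, so
         # newer contents survive until their own atime expires.
         find "$BIN" -atime +"$AGE" -exec rm -fd {} +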