wuftymerguftyguff

Everything posted by wuftymerguftyguff

  1. Hi,

     Updated diagnostics attached (with mover logging in place).

     I emptied my cache again by stopping all VMs and dockers and their respective services, getting out of all the user shares, setting the appdata share from Cache:Prefer to Cache:Yes, and running the mover. I had a look at the mover script; it seems to use find in depth mode and then hand the results off to 'move', which is not a script, so I can't see what it is really doing without more work.

     Once again, all that remains under appdata is broken symlinks. The dockers still work when they access things via the /mnt/user mount but fail otherwise. If I copy things manually using rsync, the links move and everything is happy.

     I think that as a test case you could try a default install of the binhex-krusader docker. That one seems to hit the problem every single time, whether moving from cache to array or back again.

     Thanks again for your time,
     Jeff

     cosmos-diagnostics-20210327-2335.zip
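
     P.S. For anyone wanting to reproduce the manual workaround, the rsync I used was along these lines (a sketch only; the paths are illustrative, not my exact command):

       # Move preserving symlinks: -a implies -l, so links are copied as links,
       # broken or not. --remove-source-files deletes each file/link after copy.
       rsync -a --remove-source-files /mnt/cache/appdata/ /mnt/disk1/appdata/
       # rsync leaves the emptied directory tree behind; clean it up afterwards.
       find /mnt/cache/appdata -depth -type d -empty -delete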
  2. Update,

     The move of the appdata share off the cache completed and left LOTS of symlinks and directories behind on the cache. This is really starting to look like the mover is not behaving as I understand it should. The problem is now causing issues for four dockers (plex, letsencrypt, swag and binhex-krusader). The ONLY things remaining on the cache for appdata after the mover has finished are broken symlinks and the directories containing them.

     This is a count of broken links under appdata on the cache:

       root@cosmos:/mnt/cache/appdata# find . -xtype l | wc -l
       33442

     This is a count of things that are NOT a directory or a broken link under appdata on the cache:

       root@cosmos:/mnt/cache/appdata# find . ! -xtype l,d | wc -l
       0

     It looks like the mover is NOT treating these links as links and copying them as-is; it seems to be trying to follow the links, and where the target has already been moved this fails, so the links and the directory structure remain on the source.

     Updated diags attached. As this appears to be a generic problem with the mover and symlinks, not anything to do with docker and appdata specifically, I will work on a simpler test case that we can use from now on. That will let me stop messing with my docker and VM workload.

     cosmos-diagnostics-20210324-0946.zip
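
     For anyone unfamiliar with the find tests above: with GNU find, -xtype l matches symlinks whose targets are missing, so the two counts together say the cache holds nothing but broken links and the directories around them. A throwaway demo:

       # Throwaway demo of the -xtype test on a scratch directory.
       mkdir -p /tmp/linktest && cd /tmp/linktest
       ln -s does-not-exist broken   # relative symlink with a missing target
       find . -xtype l               # prints ./broken: its target is missing
       find . ! -xtype l,d           # prints nothing: only a broken link and "." here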
  3. OK, thanks for the spot. I had certainly not noticed that. My appdata was back on the cache (after my manual interventions). I restarted in maintenance mode and ran xfs_repair -v /dev/md0; it fixed some directory entries and completed. I restarted the array normally and rebooted to clear out my logs. I have now set about recreating the problem by setting the appdata share to Cache:Yes, and the mover is running and draining my cache to drive 3 by the look of things. I will update this thread when it completes. Thanks again for your attention.
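
     (For anyone repeating this: it is worth running xfs_repair in no-modify mode first. A sketch, with the device as in my case:)

       # Array in maintenance mode; run against the md device, not the raw disk.
       xfs_repair -n /dev/md0   # no-modify check: report problems only
       xfs_repair -v /dev/md0   # then the actual repair, verbose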
  4. Hi, Thanks for your attention. My problem is that I was trying to empty the cache. VMs and dockers were stopped, so the mover should have moved everything, shouldn’t it? And it should have moved each symlink as a symlink, shouldn’t it? Or do I have a fundamental gap in my understanding here?
  5. Well, as there were no replies, there wasn’t much to be distracted from! On 6.9.1, does the mover deal with symlinks in shares properly at all? Maybe it’s not just a docker thing? Sent from my iPhone using Tapatalk
  6. Hi,

     Running 6.9.1. I have some repeatable behaviour that I can't explain, and I would appreciate your input.

     I typically run my appdata, system and domains shares with cache set to Prefer. I recently decided to move these shares back to the array in preparation for some work I am doing, so I followed the wisdom of the FAQ: stopped docker and VMs in settings, changed my shares to Cache:Yes and ran the mover. It ran for a while and moved almost all the data from the cache to the array. However, it did not totally empty the cache; a few "files" remain, and the cache reports ~3.5G still in use.

     I have had a poke around, and the issues seem to relate to files belonging to the following dockers: letsencrypt, binhex-krusader, swag.

     My first observation is that the space calculation seems weird, as the "files" left behind come nowhere near 3.5GB. Taking swag as my example, I notice that this docker makes use of symlinks, and the problem path for swag is a broken symlink whose target does not exist:

       root@cosmos:/mnt/cache/appdata/swag/keys# ls -l
       total 0
       lrwxrwxrwx 1 nobody users 38 Mar 15 21:31 letsencrypt -> ../etc/letsencrypt/live/obfuscated-domain.co.uk

     When I poke into the other problem dockers, they also all have broken symlinks. So these three "problem" dockers all make use of relative symlinks in their docker volumes.

     So my question is this: does the mover copy symlinks as links, or does it follow links? It looks to me like it may be trying to follow links whose targets have already been moved. This is backed up by the fact that the target the symlink points to exists on the array (not that it really matters; a broken symlink should be copied as a broken symlink anyway, even if the target does not exist).

     It has broken the dockers involved (not that I really care, I can sort them out), but I am worried that the mover seems to be struggling with relative symlinks between the cache and the array. If I use the Unbalance plugin, or rsync myself, the move works as I would expect.

     Anyone else seeing this or anything like it? Thanks in advance for your time and efforts. Diags attached for your grepping pleasure.

     UPDATE: Exactly the same behaviour when using the mover to move back onto the cache; same three dockers, same issues with symlinks.

     cosmos-diagnostics-20210319-0912.zip
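
     To illustrate the distinction I am asking about, a generic sketch (nothing mover-specific; the link name just mirrors the swag example above):

       # Copy-as-link vs follow-link, demonstrated with plain coreutils/rsync.
       ln -s ../etc/letsencrypt/live/example.co.uk letsencrypt
       cp -P letsencrypt /dest/      # -P: copies the symlink itself, even if broken
       cp -L letsencrypt /dest/      # -L: dereferences; fails when the target is missing
       rsync -a letsencrypt /dest/   # -a implies -l: the symlink is copied as a symlink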
  7. Hi, Thanks for your attention. I understand, but what I am seeing is slightly different. Forcing SMB v1 does not work either; if there is ANY value in the vers= option then I can’t write. If I take the mount command from the syslog and just remove the vers= option, it negotiates the highest compatible SMB version with the server and I can write. Jeff
  8. I have an issue where SMB mounts to my Drobo are not allowing writes. This has been working for years with no issues. Mounting the same share with the same username and password on the Drobo from Mac, Windows and RasPi (Raspbian) all works fine. If I manually do the cifs mount without the vers=3.0 in the command line, I can write to the share successfully. Was vers=3.0 added to the mount command in UD recently? Or has it become a default in Unraid recently?
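
     To make the difference concrete, the two variants look roughly like this (share name and credentials are placeholders):

       # Only the vers= option differs between the failing and working mounts.
       mount -t cifs //drobo/share /mnt/drobo -o username=USER,password=PASS,vers=3.0  # mounts, but writes fail
       mount -t cifs //drobo/share /mnt/drobo -o username=USER,password=PASS           # version negotiated; writes work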
  9. Hi,

     I am evaluating Unraid for my personal use at home. My intention is to run my various services in docker images where possible and have a couple of VMs that I can run occasionally when I need to do other things. One of the tasks I want to do in a VM is backup media ripping: CD, DVD and Blu-ray, including UHD 4K HDR. (My 3 year old is not very forgiving of physical media.) Booting Win10 on the same hardware outside of Unraid works faultlessly.

     By following the various guides I can get passthrough of my ASUS BC12D2HT to a Win10 VM in Unraid working, but only for one boot. The only way to get it working again is to copy the XML from the last VM to a new custom VM, change the UUID and VM name, and create a new one. Also, the SCSI address of the drive seems to jump around over time when Unraid is fully restarted; so far I have seen it at 2:0:0:0, 4:0:0:0, 5:0:0:0, 6:0:0:0 and 8:0:0:0. This means I have to adapt the XML and move to a new VM again.

     Is there any way to make this reliable and consistent? Running all my services in docker under Unraid is really attractive, but not if I have to shut it down and start replugging drives when I need to do specific tasks.

     Please help,
     Jeff
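
     P.S. For reference, this is how I check where the drive has landed after each reboot (a sketch; it assumes the burner shows up as sr0, and the sysfs paths are generic Linux, nothing Unraid-specific):

       # The resolved sysfs path ends in the drive's current host:channel:target:lun.
       readlink -f /sys/block/sr0/device
       # Full list of attached SCSI devices, for cross-checking.
       cat /proc/scsi/scsi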