-Daedalus Posted February 19, 2017
With the understanding that 'appdata' is set to cache-only, I understand that /mnt/cache/appdata and /mnt/user/appdata should be functionally identical. But I remember reading something in my early days of unRAID research - "something something appdata user shares no-no" - and I can't find anything about it now, despite some google-fu. Is there any reason I shouldn't use /mnt/user for appdata?
John_M Posted February 19, 2017
If you use /mnt/user/appdata it will work even if you don't have a cache disk/pool. Then, if you later add one and the appdata share is set to "cache: Prefer", the mover will move it from the array to the cache.
RobJ Posted February 19, 2017
One comment on that, though - I was examining the diagnostics here (no obvious issue, but other interesting things) and noticed an example of what happens with the common advice to use /mnt/user/system for docker.img and libvirt.img. In their syslog, every day, the Mover tries to move /mnt/user/system to the Cache drive. It may have succeeded with the rest of the files in the system share, but it can't move docker.img and libvirt.img because they are always in use - and it's not going to be able to, unless the VMs and dockers are stopped while the array is up. I don't know what other files exist in the system share, but it doesn't seem to me to be a good idea for it to be split up like that, part on the Cache drive and part on Disk 1 (part slower, part faster; part fault tolerant, part not; parts that go together split up). It does work, though. I realize Tom did it this way for the general user case, as /mnt/user/system works for everyone, with or without a Cache drive. But if you know you have a Cache drive, it seems more optimal (to me) to set the paths to /mnt/cache/system and /mnt/cache/appdata and place the Docker and VM files there (preferably on one or more SSDs). Plus, it's one less indirection in the data path - no FUSE involvement (most users won't care about that).
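If anyone wants to see for themselves why the Mover can't touch those two files, a quick check along these lines shows them attached as loop devices the whole time the Docker and VM services are running (just a sketch - the grep pattern assumes the default image names, so adjust it to wherever your images actually live):

losetup -a | grep -E 'docker.img|libvirt.img'

As long as they show up there, they're in use, which is why the Mover can't move them.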
trurl Posted February 19, 2017
What a user should do if they add cache after these cache-prefer shares are created is manually run the mover with the VMs and the Docker service stopped. Just another thing that is obvious to some of us but won't be considered by many without more documentation (which some will not read anyway).
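From the console that boils down to something like this (a rough sketch - I'm going from memory on the exact rc script names, and stopping the services from the webGUI accomplishes the same thing):

/etc/rc.d/rc.libvirt stop    # stop the VM service so libvirt.img is released
/etc/rc.d/rc.docker stop     # stop the Docker service so docker.img is released
mover                        # run the mover manually; nothing holds the images open now
/etc/rc.d/rc.docker start
/etc/rc.d/rc.libvirt start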
John_M Posted February 19, 2017
Very good points. What are your views on the "Cache: Only" vs "Cache: Prefer" settings for user shares? It seems that the introduction of the latter has made the former obsolete.
Endy Posted February 19, 2017
I wouldn't say that. A good example is in the video about using a cache-only share for a Steam library: you keep games stored long-term on the array, with the games you're actively playing stored on the cache drive.
trurl Posted February 19, 2017
Not really obsolete. This was recently discussed starting here, with the result that it was added to the FAQ.
John_M Posted February 19, 2017
The table and notes suggest mostly obsolete, then. The gaming example is something of a happy accident, leaving files "trapped" on the cache - I can't believe it was designed that way.
RobJ Posted February 19, 2017
But it's a useful 'accident', and jonathanm has found another 'off-label' usage. I've added both scenarios to the bottom of that FAQ entry. (I'd appreciate any further ideas and improvements there.)
John_M Posted February 19, 2017
Yes, it could well be useful, actually. I've been thinking about whether I can make use of it myself to keep the OS X dot-underscore files on the cache, since they are updated rather more often than the main files they are associated with. What I would really like, though, is the option to make the mover simply ignore any files that match "._*".
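(If anyone wants to see how many of those AppleDouble files they're dealing with, a harmless one-liner like this lists them without touching anything - just a sketch, and "MyShare" is a placeholder for whichever share your Mac actually writes to:

find /mnt/user/MyShare -type f -name '._*'

Pointing it at /mnt/cache instead shows which of them are currently sitting on the cache.)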
-Daedalus Posted February 19, 2017
So there are no weird quirks with Docker stuff on cache:only user shares, then? Think I'll have to move a few of my shares from :only to :prefer...
Squid Posted February 20, 2017
No issues with using /mnt/user/appdata vs /mnt/cache/appdata. You *may*, however, run into trouble if the same appdata was previously accessed via /mnt/cache/appdata instead of /mnt/user/appdata (or vice versa). My philosophy is to always use user shares for appdata and set them to cache: prefer, though. (I actually don't have any cache-only shares at all.) My reasoning is that, if the cache drive ever becomes completely full, I would prefer all of my applications to remain operational - which they will with prefer. When set to only, the apps may fail.
-Daedalus Posted February 21, 2017
First post following the forum migration. Things are looking pretty nice! Yup, I'm having the same thought re: cache settings.
kizer Posted February 21, 2017
This is kinda funny - I was wrestling with this same problem this weekend. I ended up choosing /mnt/cache/appdata over /mnt/user/appdata. I just figured it was the most direct approach, but then again, as said above, it doesn't really matter either way.
CHBMB Posted February 21, 2017
I have still seen odd issues with the use of /mnt/user/ rather than /mnt/cache/ or /mnt/diskX/ - IIRC with our Plex docker and OpenVPN-AS. I'm with kizer; specifying the actual disk just sits better with me.
Squid Posted February 21, 2017
I run both of those with zero issues and all links work perfectly. You just can't willy-nilly switch between the two different methods.
CHBMB Posted February 21, 2017
I've definitely helped people who were having problems with fresh installs of those apps which were solved by using /mnt/cache/ rather than /mnt/user/, and it's definitely happened since the "fix", but I can't reliably reproduce it myself.
Squid Posted February 21, 2017
Whenever that supposed problem comes up I bug everyone to post exactly how to reproduce it, and no one ever can. During the 6.2 beta days, Tom was very quick to fix any issues with a docker container's appdata existing on user shares instead of disk shares. To each their own, though. The important thing is that the app runs correctly...
dmacias Posted June 1, 2017
On 2/21/2017 at 2:04 PM, CHBMB said:
I've definitely helped people who were having problems with fresh installs of those apps that was solved by using /mnt/cache/ rather than /mnt/user/ and it's definitely been since the "fix" but I can't necessarily reproduce it myself.
I just rearranged some drives and decided to try a btrfs cache pool again, combining my two 120GB SSDs with RAID0 data / RAID1 metadata. I copied my appdata back to my cache, started with a fresh docker.img, and chose /mnt/user/appdata for the path. Several dockers kept giving me disk I/O errors - mostly those using SQLite databases (Sonarr, Emby) - and the HA-Automation-Bridge Java jar file wouldn't load either. After a good amount of time restoring backups of files and trying to diagnose the errors, I wiped everything and started with an empty cache drive and empty appdata. A clean install of Sonarr gave me the same result: a disk I/O error reading the database file. I changed the path to /mnt/cache/appdata and everything worked. I started fresh again, copied all my old cache files over with a clean docker.img, and changed the path to /mnt/cache/appdata. All my apps are up and running in their previous state. The placeholder for "Default appdata storage location:" says e.g. /mnt/user/appdata. Some dockers would work with it, e.g. Zoneminder, Libresonic, Nextcloud, MariaDB. I don't know if this is related, but I always kept my VMs on a separate XFS drive because the Mythbuntu VM would give me I/O errors when I stopped the VM and tried to copy the image to the array from a btrfs partition. I tried all kinds of different settings, like turning off COW, copying the images back, and different image types, all with the same result. This can be reproduced by installing a fresh copy of Mythbuntu on a btrfs cache drive, stopping the VM, and then trying to copy it to the array.
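(For reference, converting an existing two-device pool to that layout is normally just a btrfs balance with conversion filters - a sketch, assuming the pool is mounted at /mnt/cache as on a stock setup:

btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
btrfs filesystem df /mnt/cache    # confirm Data: RAID0, Metadata: RAID1

The second command just verifies the profiles after the balance finishes.)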
bnevets27 Posted June 16, 2017
On 2/19/2017 at 10:40 PM, Squid said:
No issues with using /mnt/user/appdata vs /mnt/cache/appdata. You *may* however run into trouble if the same appdata was previously accessed via /mnt/cache/appdata instead of /mnt/user/appdata (or vice versa)
I've had the info below spamming my log for a while now - not causing any real issue, but it does fill the log. I used to use /mnt/cache/ for most/all of my dockers until, at some point, I moved everything to /mnt/user/ (like on your advice). I'm now thinking that making that change may have caused this issue, as you mention it may cause trouble when switching. What would be the way to rectify the issue if making the switch has caused this?
/mnt/cache/xxxxxxxxxxxxxxxxxx (39) Directory not empty
Squid Posted June 16, 2017
I think that the message is probably from when the mover is kicking in. I've never really noticed it on my system, and when I see it in any diagnostics here, I just ignore it. If it is the mover, just disable the logging for it (I don't really see the need for mover logging to be turned on all the time).
bnevets27 Posted June 16, 2017
I don't think it's the mover. But I can/will disable mover logging - how would you disable it? I can't see a setting for it. I guess more info from the log might be more helpful:
Jun 15 21:05:45 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/system/docker/appdata/plex/Library/Application Support/Plex Media Server/Cache/Transcode/Sessions/plex-transcode-xxxxxxxxxxx (39) Directory not empty
Jun 15 21:10:48 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
Jun 15 21:10:48 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
Jun 15 21:11:46 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
Jun 15 21:11:46 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
Seeing as none of my dockers are pointed at /mnt/cache anymore, and seeing /mnt/cache in the logs, I figured it could be the trouble that you were referring to. I did get here from the following post:
Squid Posted June 16, 2017
The .fuse_hidden files happen when FUSE has a brain fart and is delayed, for some reason, in deleting a file. They do eventually disappear. Some people have problems with using user shares with Plex; I don't, and others don't either. No real answer as to why.
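(If you want to check whether any are actually lingering, something like this is harmless to run - just a sketch, pointed at the cache since that's what your errors reference:

find /mnt/cache -name '.fuse_hidden*' -ls

It usually comes back empty once whatever held the file open has let go.)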
bnevets27 Posted June 16, 2017
Well, it's not just Plex - SABnzbd, SickRage, etc. When they move files (every single file), I get one of those entries in the log (actually 2 or 3 at the exact same timestamp). From what I can tell it's harmless, and no one has said otherwise, but it does fill the log file over time. If I could just mute those errors... Or go back to /mnt/cache/? But you said going back and forth *may* cause trouble. What kind of trouble could it cause?
zoggy Posted January 22, 2020
To bring this back from the dead: why can't you switch between the two (if you stop everything first)? For example, I have a mishmash of both methods across my dockers and want to normalize them to one way.