.secret_squirrel Posted October 5, 2014

Hello Lime Tech Community!

Since updating to 6b10a, I noticed I now have two user shares: /mnt/user and /mnt/user0. A command-line diff between the two shares reveals they are equivalent. The logs further reveal two shfs commands that I was hoping to understand, but oddly enough searching got me nowhere. My questions are:

- What exactly is the -disks parameter, and why might two differing values be used to create multiple shares?
- What is the "-o remember" option on the second shfs command?

My intuition tells me I have something configured incorrectly and this is a side effect. Apologies if this was covered elsewhere, but does anyone else have insight they can offer to help me figure out what is going on?

Relevant log entries that establish the two user shares:

Sep 29 19:16:06 unraid-server emhttp: shcmd (26): mkdir /mnt/user0
Sep 29 19:16:07 unraid-server emhttp: shcmd (27): /usr/local/sbin/shfs /mnt/user0 -disks 16777214 -o noatime,big_writes,allow_other |& logger
Sep 29 19:16:07 unraid-server emhttp: shcmd (28): mkdir /mnt/user
Sep 29 19:16:07 unraid-server emhttp: shcmd (29): /usr/local/sbin/shfs /mnt/user -disks 16777215 2048000000 -o noatime,big_writes,allow_other -o remember=0 |& logger
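[Editor's note: the comparison described above can be reproduced with a recursive diff. A minimal self-contained sketch, using temporary directories in place of the real /mnt/user and /mnt/user0 mounts so it runs anywhere:]

```shell
# Demo of comparing two directory trees the way one would compare
# /mnt/user and /mnt/user0. Temp dirs stand in for the real mounts.
a=$(mktemp -d); b=$(mktemp -d)
echo same > "$a/file.txt"
echo same > "$b/file.txt"

# -r recurses into subdirectories, -q reports only whether files differ.
# Exit status 0 means the trees are equivalent.
diff -rq "$a" "$b" && echo "trees are equivalent"

rm -rf "$a" "$b"
```

On a live server you would run `diff -rq /mnt/user /mnt/user0` instead; with no cache drive the two should produce no differences.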
trurl Posted October 5, 2014

/mnt/user is the user shares including any files that may still be on cache; /mnt/user0 is the user shares excluding any files that may still be on cache. This is normal unRAID housekeeping.
.secret_squirrel Posted October 5, 2014 (Author)

Thank you, trurl. For clarification's sake: a scenario where user and user0 will have differences is if you have files on your cache drive (or pool) that haven't been moved to a parity-protected drive yet? And if that is true, since I don't have a cache drive, they will always be equivalent?
jphipps Posted October 5, 2014

I believe /mnt/user0 is only really used by the mover script to migrate data off the cache; you don't ever reference it directly. My guess is that if you look through all your share settings, one has the cache drive turned on. I had that on my server after upgrading: I didn't have a cache drive, but it was turned on for a couple of shares for some reason...
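[Editor's note: the check jphipps suggests can also be done from the command line. A hedged sketch, assuming per-share settings live in files like /boot/config/shares/*.cfg containing a `shareUseCache="yes"` line — the path and key name are assumptions here, so verify them on your own server. The demo uses a temp directory instead of the real config path:]

```shell
# Assumed layout (verify on your system): one .cfg file per share,
# with shareUseCache="yes" when cache use is enabled for that share.
d=$(mktemp -d)
printf 'shareUseCache="yes"\n' > "$d/Movies.cfg"
printf 'shareUseCache="no"\n'  > "$d/Backups.cfg"

# -l lists only the filenames of shares that have cache use enabled.
grep -l 'shareUseCache="yes"' "$d"/*.cfg

rm -rf "$d"
```

On a live server the equivalent would be `grep -l 'shareUseCache="yes"' /boot/config/shares/*.cfg`, which would reveal any share left with cache enabled after the upgrade.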