  • [6.0-beta6] User Share Copy Bug


    SSD
    • Closed
    Message added by JorgeB,

Please be aware that this thread was initially created in 2014 and the original comments were copied here from another source; because of that, the date and time shown for those comments are not accurate.

    unRAID OS Version: 6.0-beta6 (may go back to the beginning of unRAID)

     

Description: If a user attempts to copy files from a disk included in a user share to the user share, even after removing the disk from the user share, the copy will result in the files on the disk being truncated and the data lost. I believe that behind the scenes, FUSE sees that files are being overwritten, and it is trying to copy the files from the disk share on top of themselves, resulting in the data loss.

     

    How to reproduce:

    I have not done this myself. See description.

     

    Other information:




    User Feedback

    Recommended Comments



Knowing it is coming from a disk share and is going back to the same directory as the source would make this about the same level of difficulty as some of the options I listed.

     

    It all depends on where the program is running and if the source / destination are the same inode and/or device id.

     

    With disk mount and user share, they vary as can be seen in the stat information I posted previously.

This is why cp and mv fail; rsync succeeds because it uses hidden temp files.
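For reference, the temp-file pattern that keeps rsync safe looks roughly like this (a generic C sketch of the pattern, not rsync's actual code; rsync's real temp names are hidden dot-files next to the destination):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Write-to-temp-then-rename: the destination path is only replaced once
     * the new copy is complete, so an overlapping source is never truncated
     * mid-copy. */
    int safe_replace(const char *dst, const void *buf, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s.XXXXXX", dst);   /* temp name next to dst */
        int fd = mkstemp(tmp);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || close(fd) != 0) {
            unlink(tmp);
            return -1;
        }
        return rename(tmp, dst);    /* atomic replace within the same filesystem */
    }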

     

    My thought is to do something more intelligent and intercept the open in fuse. 

     

I.e., inspect the type of open: if it's a read/write, then open it in place.

Or do some kind of lock check, or an fcntl lock lookup of the PIDs, to see if the file is in use.

If so, then write to a temporary file and rename it after close, or version the files (i.e. mimic rsync).

Or version the files when they are opened for output with truncate.
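Very roughly, that kind of interception might look like this in a high-level FUSE handler (a sketch only, not shfs code; branch_path and temp_path_for are made-up stand-ins, and O_TRUNC only reaches open() when the filesystem is mounted with -o atomic_o_trunc):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-ins: real shfs would search the disks for the file;
     * here disk1 is hard-coded purely for illustration. */
    static void branch_path(const char *path, char *out, size_t n)
    {
        snprintf(out, n, "/mnt/disk1%s", path);
    }

    static void temp_path_for(const char *real, char *out, size_t n)
    {
        snprintf(out, n, "%s.shfs_tmp", real);
    }

    static int shfs_open(const char *path, struct fuse_file_info *fi)
    {
        char real[PATH_MAX];
        branch_path(path, real, sizeof(real));

        /* Writable open that would truncate an existing file: divert it to a
         * temp file; release() would then rename temp -> real (rsync-style). */
        if ((fi->flags & O_ACCMODE) != O_RDONLY && (fi->flags & O_TRUNC)) {
            char tmp[PATH_MAX];
            temp_path_for(real, tmp, sizeof(tmp));
            int fd = open(tmp, (fi->flags & ~O_TRUNC) | O_CREAT, 0666);
            if (fd < 0)
                return -errno;
            fi->fh = fd;    /* remember the fd so release() knows to rename */
            return 0;
        }

        int fd = open(real, fi->flags);
        if (fd < 0)
            return -errno;
        fi->fh = fd;
        return 0;
    }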

    Link to comment

I will work on this some more.  There is a solution but I have to check if it will work with SMB.  What I'm concerned with is this: if a user copies a very large file, say 25GB, I think SMB might do this:

     

    open src file

    read a chunk

    close source file

     

open target file

write chunk

close target file

     

    repeat until all chunks written.  I need to check if indeed I see those open/closes in there.....

    Link to comment

    One thing I'd like to add. We are not and cannot be responsible for what someone does to their system via command line. Root access completely eliminates our ability to do that.  My view is that the solution must work for users but only outside the context of a command line method.

    Link to comment

    My view is that the solution must work for users but only outside the context of a command line method.

     

    I tend to agree.  But I know the "Linux guys" will disagree ... they like to use command line tools for various file maintenance.  My view is that if you're using command line tools, you should be aware of things like this and simply avoid doing anything that will invoke the problem.    MOST users of UnRAID never touch the underlying Linux OS.

     

Assuming this behavior has been in UnRAID "forever" (I plan to test it on a test v4.7 setup this week just for grins), it clearly has NOT caused major problems ... otherwise you'd have had a LOT of screaming about it over the years.  The simple fact is that most folks don't re-arrange their files on the server; and those that do almost always copy from a disk share to a disk share, to put the files where they want them.  Copying from a disk share that's part of a user share to the same user share is simply not a likely operation in general.  ... I haven't tested this (and don't really plan to, except for the 4.7 test I noted), but I suspect whether or not the issue occurs also depends on WHICH disk within the user share UnRAID allocates the copy to.  In any event, it's akin to copying from a folder to the same folder -- an operation that simply doesn't make sense (and one that other OSes handle by not allowing it ... or by automatically changing the filename).

     

    Link to comment

    One thing I'd like to add. We are not and cannot be responsible for what someone does to their system via command line. Root access completely eliminates our ability to do that.  My view is that the solution must work for users but only outside the context of a command line method.

     

     

I disagree. cp or mv, or some derivative via perl, can be used by other programs and tools.

Scripts, plugins and/or docker apps could possibly do this.

     

     

     

     

    Link to comment

    Clearly a "fix" that completely eliminates the issue would be nice.

     

    But isn't it reasonable to expect those Docker apps, etc. to be "well behaved" ??    If this bug has been there "forever" (which seems likely), it's unlikely it's ever been an issue with any of the add-ons ... otherwise it would surely have surfaced long ago, and the problems it creates would have been FAR more prevalent.

     

    And you will ALWAYS be able to "screw things up" with direct command line utilities  :)

     

     

    Link to comment

The issue isn't whether the docker apps are well behaved or not.

    cp and mv protect you from overwriting the same file by checking the stat information.

     

    same device/inode prevents an overwrite.

    These tools fail to protect because the usershare reports different device/inodes.

     

These tools are well behaved; the core issue is the disconnect from one filesystem to the other.

    The fuse/usershare layer needs an improvement or a general warning.
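For reference, the check those tools rely on boils down to something like this (a minimal sketch, not coreutils' actual source):

    #include <sys/stat.h>

    /* Two paths are "the same file" only if both the device and the inode
     * match.  /mnt/disk1/... and /mnt/user/... report different st_dev
     * (and different st_ino), so this returns 0 and the overwrite goes ahead. */
    int same_file(const char *a, const char *b)
    {
        struct stat sa, sb;
        if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
            return 0;
        return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
    }

Run against the disk share path and the matching user share path, it reports "different" (as the stat output later in this thread shows), which is exactly why the overwrite is allowed.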

     

    If the basic tools can protect you, then any solution we come up with should at least try to do the same.

To turn your back and not warn people, or not attempt to provide a potential solution, would be irresponsible.

     

    We only know how many people have been bitten by this issue because bjp999 has been voicing it.

     

If midnight commander is included to operate at the command line via telnet, can that be a victim of the issue?

     

FWIW, I'm not expecting a complete solution in the next release, but we should continue to discuss and try to come up with ideas on the best way to protect people from themselves with the tools that are included in the operating system.

     

    People are going to make mistakes, but the operating system should not add to those without warning.

    Link to comment

    One thing I'd like to add. We are not and cannot be responsible for what someone does to their system via command line. Root access completely eliminates our ability to do that.  My view is that the solution must work for users but only outside the context of a command line method.

     

    I agree that you cannot protect a user against user error.

     

    But copying data - no one would expect an attempt to copy data would delete the source. I do not believe this falls under the category of user error.

     

The issue isn't whether the docker apps are well behaved or not.

    cp and mv protect you from overwriting the same file by checking the stat information.

     

    same device/inode prevents an overwrite.

    These tools fail to protect because the usershare reports different device/inodes.

     

These tools are well behaved; the core issue is the disconnect from one filesystem to the other.

    The fuse/usershare layer needs an improvement or a general warning.

     

    If the basic tools can protect you, then any solution we come up with should at least try to do the same.

To turn your back and not warn people, or not attempt to provide a potential solution, would be irresponsible.

     

    We only know how many people have been bitten by this issue because bjp999 has been voicing it.

     

If midnight commander is included to operate at the command line via telnet, can that be a victim of the issue?

     

FWIW, I'm not expecting a complete solution in the next release, but we should continue to discuss and try to come up with ideas on the best way to protect people from themselves with the tools that are included in the operating system.

     

    People are going to make mistakes, but the operating system should not add to those without warning.

     

    Agree with all but the "no full solution in the next release." I don't expect a full solution in the next BETA, but do expect a full solution in the next final release. But the release notes for the next beta should include a warning.

    Link to comment

    But copying data - no one would expect an attempt to copy data would delete the source. I do not believe this falls under the category of user error.

     

I agree philosophically with that statement.  But I also think no one would think it's reasonable to copy a file from a folder onto itself (i.e. the source and destination are identical).  That, in essence, is what's happening when these copies are done ... except that it's not obvious because the references are different [i.e. \\Tower\disk1\MyShare\MyFile and \\Tower\MyShare\MyFile].

     

    Brian =>  You've likely studied this issue more than most.  Do you happen to know if the problem only occurs when the target disk UnRAID chooses within the share happens to be the source disk?    This could be very useful info for Tom in isolating a fix.    Also, doesn't a copy cause a "Create" file open (like it would on Windows)?  If so, then it should be fairly easy to resolve this like Windows does, by simply appending a suffix to the new filename.

     

     

    Link to comment

    But copying data - no one would expect an attempt to copy data would delete the source. I do not believe this falls under the category of user error.

     

I agree philosophically with that statement.  But I also think no one would think it's reasonable to copy a file from a folder onto itself (i.e. the source and destination are identical).  That, in essence, is what's happening when these copies are done ... except that it's not obvious because the references are different [i.e. \\Tower\disk1\MyShare\MyFile and \\Tower\MyShare\MyFile].

     

    It may be what is happening TECHNICALLY, but the physical implementation of the feature is not something the user could reasonably be expected to understand.

     

    Brian =>  You've likely studied this issue more than most.  Do you happen to know if the problem only occurs when the target disk UnRAID chooses within the share happens to be the source disk?    This could be very useful info for Tom in isolating a fix.    Also, doesn't a copy cause a "Create" file open (like it would on Windows)?  If so, then it should be fairly easy to resolve this like Windows does, by simply appending a suffix to the new filename.

     

    The following is not at all clear (IMO) about user shares. This is not explained anywhere that I know of, and most users do not understand #1 and #3:

     

    1 - ALL disks that have a root directory with the name of the user share are included in the user share, even if explicitly excluded or left off of the include list.

     

    2 - If a new file is copied to a user share, it will be written to one of the included (or not excluded disks) following the allocation method and split level setting.

     

    3 - If a file is written to a user share with the same path and filename as already exists on the user share, it will OVERWRITE that file, even if it is on an excluded (or not included) disk.

     

I am not averse to appending something when overwriting a file, if you can eliminate some of the weird scenarios (i.e., Windows prompts whether to overwrite or rename, user clicks overwrite, and unRAID renames it anyway).

     

    Tom said he was going to explore something with Samba to see if the file is opened once or multiple times. That could have a big impact on a solution.

     

     

    http://lime-technology.com/forum/index.php?topic=34480.msg321241#msg321241

     

    There is no Samba fix in the world that will fix this.

    Link to comment

Here's a command line example of the problem. Note: I am not root. Root access is not the issue.

     

    rcotrone@unRAID:/mnt/disk1/TMP$ ls -l datafile.zip
    -rw-rw-rw- 1 rcotrone users 282096032 2014-08-10 14:34 datafile.zip
    
    rcotrone@unRAID:/mnt/disk1/TMP$ cp datafile.zip datafile.zip
    cp: `datafile.zip' and `datafile.zip' are the same file
    
    rcotrone@unRAID:/mnt/disk1/TMP$ mv datafile.zip datafile.zip 
    mv: `datafile.zip' and `datafile.zip' are the same file
    
    rcotrone@unRAID:/mnt/disk1/TMP$ cd /mnt/user/TMP
    
    rcotrone@unRAID:/mnt/user/TMP$ ls -l datafile.zip
    -rw-rw-rw- 1 rcotrone users 282096032 2014-08-10 14:34 datafile.zip
    
    rcotrone@unRAID:/mnt/user/TMP$ cp datafile.zip datafile.zip
    cp: `datafile.zip' and `datafile.zip' are the same file
    
    rcotrone@unRAID:/mnt/user/TMP$ mv datafile.zip datafile.zip
    mv: `datafile.zip' and `datafile.zip' are the same file
    
    rcotrone@unRAID:/mnt/disk1/TMP$ cp datafile.zip /mnt/user/TMP/datafile.zip
    
    rcotrone@unRAID:/mnt/disk1/TMP$ ls -l datafile.zip
    -rw-rw-rw- 1 rcotrone users 0 2014-08-10 14:35 datafile.zip
    
    rcotrone@unRAID:/mnt/disk1/TMP$ mv -v datafile.zip /mnt/user/TMP/datafile.zip
    `datafile.zip' -> `/mnt/user/TMP/datafile.zip'
    mv: cannot open `datafile.zip' for reading: No such file or directory
    
    rcotrone@unRAID:/mnt/disk1/TMP$ ls -l datafile.zip
    /bin/ls: cannot access datafile.zip: No such file or directory
    
    rcotrone@unRAID:/mnt/disk1/TMP$ cp VPS_048.zip datafile.zip
    
The device and inodes are different, which is part of the issue.
    
    rcotrone@unRAID:/mnt/disk1/TMP$ stat datafile.zip 
      File: `datafile.zip'
      Size: 282096032       Blocks: 551514     IO Block: 4096   regular file
    Device: 901h/2305d      Inode: 718782      Links: 1
    Access: (0666/-rw-rw-rw-)  Uid: ( 1000/rcotrone)   Gid: (  100/   users)
    Access: 2014-08-10 14:37:49.000000000 -0400
    Modify: 2014-08-10 14:37:50.000000000 -0400
    Change: 2014-08-10 14:37:50.000000000 -0400
    
    rcotrone@unRAID:/mnt/disk1/TMP$ stat /mnt/user/TMP/datafile.zip 
      File: `/mnt/user/TMP/datafile.zip'
      Size: 282096032       Blocks: 551514     IO Block: 131072 regular file
    Device: 10h/16d Inode: 165509      Links: 1
    Access: (0666/-rw-rw-rw-)  Uid: ( 1000/rcotrone)   Gid: (  100/   users)
    Access: 2014-08-10 14:37:49.000000000 -0400
    Modify: 2014-08-10 14:37:50.000000000 -0400
    Change: 2014-08-10 14:37:50.000000000 -0400
    

     

This test should probably be done with a samba mount and/or an NFS mount.

Anyway, the protection is at the application level; FUSE obfuscates any potential application protection.

     

At the very least, if a matching share directory is 'excluded' or not explicitly 'included', perhaps there should be some priority adjustment so that files having the same name cannot be overwritten via an open create.

     

    I.E. It takes an explicit definition in order for a user share file to be the target of an open create.

    Link to comment

    Tom said he was going to explore something with Samba to see if the file is opened once or multiple times. That could have a big impact on a solution.

     

    http://lime-technology.com/forum/index.php?topic=34480.msg321241#msg321241

     

    There is no Samba fix in the world that will fix this.

     

    I'm aware of the root cause. I'm aware that samba is not the cause and samba cannot fix this.

     

The point of research is: if file locking is used in any manner, and samba keeps opening and closing the file for chunks, any file locking or pid file lookup in FUSE/usersharefs via fuser will be useless.

     

    File locking or open file lookup to see who has the file open could be one preventative solution.

     

cp and mv use the stat information; FUSE cannot, therefore on open FUSE needs to do something more intelligent, which may mean checking for a file lock, a file open in use, or some other method, e.g. priority of the disk share, or explicit vs. implicit include/exclude.

     

There are a couple ways this can go; in any case the usershare/FUSE needs an adjustment, and that may depend on how Samba accesses the files, as per Tom's concern.
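As a rough illustration of the lock-lookup idea (a sketch only; note that plain cp/mv take no advisory locks, so this check alone would not catch them):

    #include <fcntl.h>
    #include <unistd.h>

    /* Ask the kernel whether any other process holds an advisory lock on the
     * whole file.  With F_GETLK, fl is rewritten to describe a conflicting
     * lock if one exists (fl.l_pid identifies the holder); otherwise
     * fl.l_type comes back as F_UNLCK. */
    static int file_locked_elsewhere(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return 0;
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int locked = (fcntl(fd, F_GETLK, &fl) == 0 && fl.l_type != F_UNLCK);
        close(fd);
        return locked;
    }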

    Link to comment

    Ok, this has all come back to me.... I am aware of this issue, and it's on a real old 'todo' list.

     

The problem is that 'cp' and 'mv' and friends execute a 'stat' on each source and target to check, among other things, if it's the same file.  It does this, for example, to check if two links (hard or soft) point to the same file.

     

To do this it checks both the st_dev field and the st_ino field in the stat structure.  The 'system' fills in st_dev and in our case, FUSE fills in st_ino.  It is possible to tell FUSE to pass through the st_ino field from the underlying branch (this is the 'use_ino' FUSE option).  This does not solve the problem however, because st_dev is still filled in by the 'stat' code outside of FUSE.  There's no way to 'spoof' the st_dev field to pass through the branch st_dev field - doing so is not advisable anyway because this can cause other issues.
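To make that concrete, a getattr along these lines (illustrative only, not actual shfs source; branch_path is a made-up stand-in) shows where the two fields come from:

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <sys/stat.h>
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical resolver: real shfs searches the disks for the file. */
    static void branch_path(const char *path, char *out, size_t n)
    {
        snprintf(out, n, "/mnt/disk1%s", path);
    }

    static int shfs_getattr(const char *path, struct stat *stbuf)
    {
        char real[PATH_MAX];
        branch_path(path, real, sizeof(real));
        if (lstat(real, stbuf) != 0)
            return -errno;
        /* With -o use_ino the st_ino copied here is honored by the kernel,
         * but st_dev is always replaced with the FUSE mount's own device id,
         * so cp/mv still see two "different" files. */
        return 0;
    }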

     

    Fundamentally then, it's not possible to solve this problem given that we want to retain the option of referencing the same file via two different file systems.  Other union-like filesystems solve this by not making the individual branches accessible.  I guess this is not an option for unRaid because it can break how lots of ppl are using disk/user shares.

     

There is another solution which I've wanted to explore because it helps with NFS stale file handles.  What I would do is make a change in shfs that made every single file look like a symlink!  So if you did an 'ls' on /mnt/user/someshare then what you'd see is all directories normal, but all files are symlinks to the "real" file on a disk share.  This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors.  I dunno, might throw this in as an experimental option...
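My reading of that idea, as a sketch (not actual shfs code; branch_path is again a made-up stand-in for however shfs resolves the backing disk):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <sys/stat.h>
    #include <string.h>
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical resolver: real shfs would pick the correct disk. */
    static void branch_path(const char *path, char *out, size_t n)
    {
        snprintf(out, n, "/mnt/disk1%s", path);
    }

    /* Directories keep their real attributes; regular files are presented
     * as symlinks whose target is the backing disk-share path. */
    static int shfs_getattr(const char *path, struct stat *stbuf)
    {
        char real[PATH_MAX];
        branch_path(path, real, sizeof(real));
        if (lstat(real, stbuf) != 0)
            return -errno;
        if (S_ISREG(stbuf->st_mode)) {
            stbuf->st_mode = S_IFLNK | 0777;
            stbuf->st_size = strlen(real);   /* length of the link target */
        }
        return 0;
    }

    static int shfs_readlink(const char *path, char *buf, size_t size)
    {
        char real[PATH_MAX];
        branch_path(path, real, sizeof(real));
        strncpy(buf, real, size - 1);        /* FUSE expects a NUL-terminated target */
        buf[size - 1] = '\0';
        return 0;
    }

If the "copy" then resolves through the symlink to the same /mnt/diskN path on both sides, cp's same-device/same-inode check should see one file again and refuse the overwrite.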

    Link to comment

    Ok, this has all come back to me.... I am aware of this issue, and it's on a real old 'todo' list.

     

The problem is that 'cp' and 'mv' and friends execute a 'stat' on each source and target to check, among other things, if it's the same file.  It does this, for example, to check if two links (hard or soft) point to the same file.

     

To do this it checks both the st_dev field and the st_ino field in the stat structure.  The 'system' fills in st_dev and in our case, FUSE fills in st_ino.  It is possible to tell FUSE to pass through the st_ino field from the underlying branch (this is the 'use_ino' FUSE option).  This does not solve the problem however, because st_dev is still filled in by the 'stat' code outside of FUSE.  There's no way to 'spoof' the st_dev field to pass through the branch st_dev field - doing so is not advisable anyway because this can cause other issues.

     

    Fundamentally then, it's not possible to solve this problem given that we want to retain the option of referencing the same file via two different file systems.  Other union-like filesystems solve this by not making the individual branches accessible.  I guess this is not an option for unRaid because it can break how lots of ppl are using disk/user shares.

     

There is another solution which I've wanted to explore because it helps with NFS stale file handles.  What I would do is make a change in shfs that made every single file look like a symlink!  So if you did an 'ls' on /mnt/user/someshare then what you'd see is all directories normal, but all files are symlinks to the "real" file on a disk share.  This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors.  I dunno, might throw this in as an experimental option...

     

    Thanks Tom. This is the type of analysis that was needed. It is disappointing, but I understand you are using a combination of your logic and FUSE to put the user share feature together, and that some things are possible and others are not.  The last thing you mention (user shares look like symlinks) sounds promising, and I'm hopeful that will be a solution.

     

    A few suggestions related to the user share solution:

1 - When enabling user shares, pop up a one time informational message explaining the user share feature and this potential gotcha. Users should be warned that copying files from disk shares to user shares, and user shares to disk shares (I think the same thing could happen), should be avoided.

2 - Add additional info on the user share configuration screen that makes the feature and options more clear. Include a read-only field that indicates what disks are REALLY in the share (for read purposes), and clearly label that the "include" and "exclude" fields only limit writes of new files.

    3 - If a user excludes a disk that is part of the user share, pop up a warning that in order to truly exclude a disk, you would need to rename its root level folder. I am not even sure if that is enough. Would you have to bounce the array afterwards?

    Link to comment

    Fundamentally then, it's not possible to solve this problem given that we want to retain the option of referencing the same file via two different file systems.

     

    Well stated.  So functionally it's even worse than copying a file from a folder to the same folder, since neither the copying utility nor the OS natively recognizes that this is happening.

     

     

There is another solution which I've wanted to explore because it helps with NFS stale file handles.  What I would do is make a change in shfs that made every single file look like a symlink!  So if you did an 'ls' on /mnt/user/someshare then what you'd see is all directories normal, but all files are symlinks to the "real" file on a disk share.  This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors.  I dunno, might throw this in as an experimental option...

     

    It'd be nice if this actually resolves the issue.    But I DO think the reality is it's a fairly rare occurrence that can be "resolved" through education and, as Brian suggested, some clear warnings on the User Share page about the possible data loss if copying from a disk share to the parent user share that includes it.

     

    You may want to include an option to make a share "Read Only" -- with a note outlining how this would eliminate the possibility of this issue occurring.  Clearly there are downsides to that ... but for those who write their new content to disk shares, and don't use any utilities that require write access to the user shares, this would eliminate any possibility of encountering this problem.

     

    Link to comment

    Fundamentally then, it's not possible to solve this problem given that we want to retain the option of referencing the same file via two different file systems.

     

    Well stated.  So functionally it's even worse than copying a file from a folder to the same folder, since neither the copying utility nor the OS natively recognizes that this is happening.

     

     

There is another solution which I've wanted to explore because it helps with NFS stale file handles.  What I would do is make a change in shfs that made every single file look like a symlink!  So if you did an 'ls' on /mnt/user/someshare then what you'd see is all directories normal, but all files are symlinks to the "real" file on a disk share.  This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors.  I dunno, might throw this in as an experimental option...

     

    It'd be nice if this actually resolves the issue.    But I DO think the reality is it's a fairly rare occurrence that can be "resolved" through education and, as Brian suggested, some clear warnings on the User Share page about the possible data loss if copying from a disk share to the parent user share that includes it.

     

    You may want to include an option to make a share "Read Only" -- with a note outlining how this would eliminate the possibility of this issue occurring.  Clearly there are downsides to that ... but for those who write their new content to disk shares, and don't use any utilities that require write access to the user shares, this would eliminate any possibility of encountering this problem.

     

I don't think this read only approach would work. If the share is read only, and you copy a file from the share to the disk it is sourced from, I still think it gets clobbered. Tell me if I'm wrong.

    Link to comment

    A few suggestions related to the user share solution:

1 - When enabling user shares, pop up a one time informational message explaining the user share feature and this potential gotcha. Users should be warned that copying files from disk shares to user shares, and user shares to disk shares (I think the same thing could happen), should be avoided.

2 - Add additional info on the user share configuration screen that makes the feature and options more clear. Include a read-only field that indicates what disks are REALLY in the share (for read purposes), and clearly label that the "include" and "exclude" fields only limit writes of new files.

    3 - If a user excludes a disk that is part of the user share, pop up a warning that in order to truly exclude a disk, you would need to rename its root level folder. I am not even sure if that is enough. Would you have to bounce the array afterwards?

     

Is it possible to eliminate a disk from the group by turning off the implied folders, via some option set per user share?

     

    While you can have include/exclude, there still is the implied addition to a user share.

     

    The warnings are a good idea. Maybe not a pop up but text directly on the user share configuration page.

    Something that cannot be ignored.

    Link to comment

There is another solution which I've wanted to explore because it helps with NFS stale file handles.  What I would do is make a change in shfs that made every single file look like a symlink!  So if you did an 'ls' on /mnt/user/someshare then what you'd see is all directories normal, but all files are symlinks to the "real" file on a disk share.  This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors.  I dunno, might throw this in as an experimental option...

     

    I like this option.

     

Will the symlink point to /mnt/disk1/someshare, and does that require the disk share to be exposed, or is it only at the operating system level?

     

    This could be a good solution.

Might break something for someone, so having the option of the current behavior or the new symlink behavior might be needed for a while.

    Link to comment

I don't think this read only approach would work. If the share is read only, and you copy a file from the share to the disk it is sourced from, I still think it gets clobbered. Tell me if I'm wrong.

     

    Simple enough to try that ... so just for grins, I tried this on my primary (v5.0.5) system.    You're correct -- the file gets clobbered (it's still there, it's just empty).    Obviously my share isn't read-only, so there's some chance that the behavior might be different if it was -- but it does seem likely that the issue is still there.

     

    By the way, doing this DOES give the Windows prompt noting the file already exists, and asks if you want to Copy and Replace the old file; Copy but keep both files; or Not Copy the file.    The problem only happens if you select the first option.

     

     

    Link to comment

    Brian => Now that I did that test on a "real" system (instead of the test one I haven't yet added drives to) ... is there any other known impact except for losing the data from the file that was clobbered?

     

    Just wondering if there's anything I should check  :)

[I guess I'll run a parity check just for grins to be sure parity was maintained okay in the process.]

    Link to comment

    Brian => Now that I did that test on a "real" system (instead of the test one I haven't yet added drives to) ... is there any other known impact except for losing the data from the file that was clobbered?

     

    Just wondering if there's anything I should check  :)

[I guess I'll run a parity check just for grins to be sure parity was maintained okay in the process.]

     

I doubt it would disrupt parity, but certainly no harm doing a parity check. You might also check the filesystem to make sure this didn't confuse RFS.

    Link to comment

    Brian => Now that I did that test on a "real" system (instead of the test one I haven't yet added drives to) ... is there any other known impact except for losing the data from the file that was clobbered?

     

    Just wondering if there's anything I should check  :)

[I guess I'll run a parity check just for grins to be sure parity was maintained okay in the process.]

     

I doubt it would disrupt parity, but certainly no harm doing a parity check. You might also check the filesystem to make sure this didn't confuse RFS.

     

     

    I don't think it's at this layer guys. With cp and mv, the disk to usershare protection is bypassed.

    The application makes the mistake because of the data returned by FUSE/usershare.

    According to the application, each file is different enough to allow the overwrite.

I.e. an open create/truncate, then an open read. By the time the read succeeds, the open create has truncated the file.

With mv, it looks like the file is unlinked before the open read, so the file can disappear.
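In syscall terms the sequence being described is roughly this (illustrative, using the paths from the transcript above; cp's exact ordering may differ, but either way the truncate lands before any data is read):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Opening the "destination" with O_TRUNC empties it immediately ... */
        int dst = open("/mnt/user/TMP/datafile.zip",
                       O_WRONLY | O_CREAT | O_TRUNC, 0666);

        /* ... and since it is the same physical file on disk1, the source is
         * already 0 bytes by the time it is read. */
        int src = open("/mnt/disk1/TMP/datafile.zip", O_RDONLY);

        close(src);
        close(dst);
        return 0;
    }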

    Link to comment

    I did a parity check overnight and it was fine, so I'll assume nothing else was harmed.

     

    It WOULD be nice if this was fixed, but I think it's a pretty rare thing ... otherwise this would have surfaced as a MAJOR bug a long time ago.

     

    However, I suspect it's a bit more likely now with drives becoming so large, as some folks want to reduce their drive count and use larger drives, so they pop in a big (4-6TB) drive, and then, before adding it to a share, copy a lot of (or even the entire contents of) the share to the new drive.    This wouldn't be an issue if they didn't use the same top-level folder name; but if they do, clearly this could have very bad results.    And for those that aren't backed up, it would be even worse !!

     

    Not sure what logic would be involved within Linux to implement it, but it seems this could be resolved with the following:  Do NOT allow a write command if the filename is already open and EITHER the open file or the requested write is being accessed via a User Share.    This would not require read-only shares;  would allow easy movement of files between disks by using the disk shares;  would still allow normal writes to the User shares; etc.    This could, of course, get a bit tricky -- if, for example, the read is being accessed via a user share, but the destination is NOT to a folder with the same name as the share, then it's actually okay to do the copy ... so to implement it right there'd also have to be a check r.e. whether or not the two files involved were in the same share.

     

    I wouldn't put this at the top of Tom's "fix list" ... but I WOULD put a warning about it on the User Shares page so folks know it's an issue.

     

    Link to comment

Tom's symlink approach could resolve this nicely, since the user share entry will reference the original file directly.

    A caveat may be slower access to large user shares with lots of files.

     

Since a symlink is actually a type of file that references the original file, it has to be stat'ed, opened, readlink'ed, and closed before any other filesystem operation.

     

    There's a code example here.

    http://linux.die.net/man/2/readlink
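A minimal example in the spirit of that man page, showing the extra lstat/readlink step every traversal would now involve (illustrative only):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Resolve where a user-share entry would point if it were presented as a
     * symlink, e.g. /mnt/user/TMP/datafile.zip -> /mnt/disk1/TMP/datafile.zip. */
    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }

        struct stat sb;
        if (lstat(argv[1], &sb) == -1) {            /* the extra stat every tool now does */
            perror("lstat");
            return 1;
        }

        char target[4096];
        ssize_t n = readlink(argv[1], target, sizeof(target) - 1);
        if (n == -1) {                              /* not a symlink, or error */
            perror("readlink");
            return 1;
        }
        target[n] = '\0';
        printf("%s -> %s\n", argv[1], target);
        return 0;
    }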

     

On the pro side, when traversing a user share with ftw or find, as in cache_dirs, both the original file and the usershare file (symlink) will be in the caches.

     

On top of that, if this reduces the NFS stale handle issue, that's two issues handled.

    Link to comment





