johnlabod

Everything posted by johnlabod

  1. Sorry, I did do that, I just accidentally left it out of my original post. I have edited it.

     My workaround right now is to add up the size of every file and collect the filenames in an array. Once the accumulated size is over 2GB, I create a tar archive from the array, something like this:

         tar -cf archive.tar file1 file2 file3

     This works.

     I took a look at tar's source and it seems to be a problem with blocking. If you run

         tar -rf archive.tar file2

     within an shfs filesystem and then list out the contents of the file:

         cat archive.tar

     you will see this:

         file10000666000000000000000000000000513614311576010505 0ustar rootrootfile
         file10000666000000000000000000000000513614311576010505 0ustar rootrootfile
         file20000666000000000000000000000000513614311577010507 0ustar rootrootfile

     Tar doesn't know where to start rewriting the file, so it starts at the very first block. This duplicates the first file's entry, which corrupts the entire archive. I am still not sure why this happens specifically with this filesystem. I found a repository for shfs, but it doesn't seem to have been updated since 2004, so I assume Limetech maintains their own private branch. If that is the case and the source is closed off, then I am afraid this is a dead end.
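     For reference, a minimal sketch of the batching workaround described above (the source and staging directories, the 2GB limit expressed in bytes, and the archive naming are illustrative assumptions, not details from the plugin):

         #!/bin/bash
         # Sketch: collect files until the running total exceeds ~2GB,
         # then write everything collected so far into one tar archive.
         SRC=/mnt/user/data                  # assumed source directory
         OUT=/mnt/disk1/tempdir              # assumed staging location (a disk path, not /mnt/user)
         LIMIT=$((2 * 1024 * 1024 * 1024))   # 2GB in bytes

         mkdir -p "$OUT"
         batch=()    # filenames collected for the current archive
         total=0     # accumulated size in bytes
         n=1         # archive counter

         flush() {
             # create one archive from everything collected so far
             [ ${#batch[@]} -eq 0 ] && return
             tar -cf "$OUT/archive-$n.tar" "${batch[@]}"
             n=$((n + 1))
             batch=()
             total=0
         }

         cd "$SRC" || exit 1
         for f in *; do
             [ -f "$f" ] || continue
             size=$(stat -c %s "$f")
             # if adding this file would cross the limit, archive the current batch first
             if [ $((total + size)) -gt "$LIMIT" ]; then
                 flush
             fi
             batch+=("$f")
             total=$((total + size))
         done
         flush   # archive any remaining files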
  2. Hello all,

     I just recently got a license for Unraid and I am working on a small plugin to back up my server to an online storage bucket. To accomplish this I decided to split the files I want to back up into separate tar archives and send them one at a time, rather than creating one large tar file of everything and taking up a lot of space on my system.

     Anyway, I have been running into this problem: I create an archive in /mnt/user/tempdir, but any file I append to that archive is not added. To recreate the issue:

         cd /mnt/user/
         mkdir tempdir
         cd tempdir
         touch file1 file2
         # create the archive
         tar -cf archive.tar file1
         # list the files
         tar -tf archive.tar
         file1
         # append file2 to the existing archive
         tar -rf archive.tar file2
         # list the files again, but still only file1 is part of the archive
         tar -tf archive.tar
         file1

     The correct output for the second `tar -tf archive.tar` would be

         file1
         file2

     but it is not. However, if I change the directory to /mnt/disk1/ the operation succeeds:

         cd /mnt/disk1/
         mkdir tempdir
         cd tempdir
         touch file1 file2
         tar -cf archive.tar file1
         tar -tf archive.tar
         file1
         tar -rf archive.tar file2
         tar -tf archive.tar
         file1
         file2

     I see that Unraid uses shfs to mount all the drives at /mnt/user, and I have to admit I had never heard of this filesystem before. Anyway, my question is: what could be causing tar's append command to fail within the shfs filesystem? Does anyone have any ideas? I'm not even sure where to start.

     Edit: Also, if anyone has suggestions on how to accomplish my original goal, I would appreciate them. What I am trying to do is create a number of <2GB archives and upload them to a storage bucket; the problem is where to store each archive before sending it off. I chose /mnt/user because I am pretty sure that is the only location that uses the hard disks.
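     For what it's worth, a minimal sketch of that stage-then-upload step (the staging path /mnt/disk1/tempdir, the source file names, and the use of rclone with a remote called "bucket:backups" are assumptions for illustration, not something the plugin prescribes):

         # stage the archive on a disk path, where tar behaves normally
         STAGE=/mnt/disk1/tempdir            # assumed staging directory
         mkdir -p "$STAGE"

         # build one archive directly from files on the user share (assumed source path)
         tar -cf "$STAGE/archive-1.tar" -C /mnt/user/data file1 file2 file3

         # upload the finished archive, then delete the local copy to free the space
         rclone copy "$STAGE/archive-1.tar" bucket:backups && rm "$STAGE/archive-1.tar"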