BloodBlight's Achievements

  1. I freed up some space and started DDing out in 100 MB chunks. It let me write to the cache drive all the way down to 113 MB free... But I also noticed this: This got me thinking that this could be a simple math error. So I adjusted the minimum cache size to 1.6 GB (came to that with some math I can't remember) and shazam! No more writes to the cache drive, even for 1k files. I ended up with 376 MB free when I SHOULD have had no less than 1.6 GB. But I re-did my tests and was able to zero it out again... So IDK at this point...
  2. Interesting... So, SSHFS: no change. If I rsync files to the drive and cause the error, then do a DD to the share, it goes directly to the array. If I delete a single 400 MB file, taking the free space up from 220 MB to 620 MB, the DD will go to the cache drive even though it has less than the 1 GB minimum, and it errors out!
  3. Indeed! I will do a dd with oflag=direct. I THINK that works through NFS. Oh... Ya know, I might try an SSHFS mount and see if it does the same thing. Testing now...
  4. I don't think that it does. I am working under the assumption that at least one file will cause a policy violation before triggering the rule to store elsewhere. I picked these files and sizes intentionally. The 1 GB minimum free size should be able to accommodate the largest file (400 MB) two times over. So one file takes it below the 1 GB mark (by even 1 byte), the OS can start writing a second file (assuming the file that would break the rule is still flushing), but by the time it gets to the third file, the first should be flushed and trigger the rule to move it down to the next tier of storage. Does that line of thinking sound right? Sorry, I am very good at breaking things.
  5. For my process, I am deleting the folder, which frees up about 2 GB on the cache drive, then restarting the copy. Is there some sort of delay on the free space calculation? This is not a multi-threaded copy, so...
  6. Understood. The largest file being copied is only about 400 MB, so that should not be an issue here. And right now, even with that set, it is sitting at 220 MB free: Even if you subtract the largest file in the copy (400 MB) from the 1 GB minimum, it should never go below 600 MB free... So it seems like something isn't working right.
  7. I think so, yes. Is this what you mean:
  8. Off-lined the array and grabbed a screenshot of the cache drive:
  9. I think I misread something there the first time around (prepping to switch to unRAID from LizardFS). What I was hoping for was a way to remove a disk while maintaining parity. I think the clean-drive-and-remove procedure would do this. Sorry for burning time.
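
The 100 MB dd probe from post 1 can be sketched as a small shell loop. This is a sketch, not the exact commands used: `MOUNT` and `CHUNK_MB` are assumptions, and `MOUNT` defaults to a throwaway temp dir so it runs anywhere; point it at the user share (e.g. `/mnt/user/<share>`) to reproduce the real test.

```shell
# Free-space probe: write fixed-size chunks and watch the reported free
# space shrink after each one. MOUNT defaults to a temp dir; point it at
# the share to run the real test against the cache drive.
MOUNT="${MOUNT:-$(mktemp -d)}"
CHUNK_MB="${CHUNK_MB:-100}"
i=0
while [ "$i" -lt 3 ]; do   # capped at 3 chunks here; raise to fill the drive
    dd if=/dev/zero of="$MOUNT/chunk_$i" bs=1M count="$CHUNK_MB" status=none || break
    free_kb=$(df -k "$MOUNT" | awk 'NR==2 {print $4}')
    echo "chunk $i written, ${free_kb} KiB free"
    i=$((i + 1))
done
```

Watching the free figure after each chunk is what exposed the drive being written down to 113 MB despite the minimum-free setting.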
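
The `oflag=direct` test from post 3 would look roughly like this. `TARGET` is an assumption and defaults to a temp file so the sketch runs locally; on the real setup, point it at a file on the NFS or SSHFS share.

```shell
# O_DIRECT write: oflag=direct bypasses the client page cache, so the
# write is pushed straight through the mount instead of being absorbed
# by caching on the way to the server.
TARGET="${TARGET:-$(mktemp)}"
# Some filesystems (e.g. tmpfs) refuse O_DIRECT; fall back to a buffered
# write so the sketch still runs anywhere.
dd if=/dev/zero of="$TARGET" bs=1M count=10 oflag=direct status=none 2>/dev/null ||
    dd if=/dev/zero of="$TARGET" bs=1M count=10 status=none
ls -l "$TARGET"
```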
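
The free-space invariant argued in posts 4 and 6 can be written out explicitly. The numbers are taken from the posts (1 GB treated as a round 1000 MB, as the posts do); this is a sanity check of the reasoning, not Unraid's actual allocator logic.

```shell
# With a 1 GB minimum-free setting and a 400 MB largest file, the cache
# drive should never drop more than one largest-file below the minimum.
min_free_mb=1000
largest_file_mb=400
floor_mb=$((min_free_mb - largest_file_mb))
echo "worst-case free space should be ${floor_mb} MB"
observed_mb=220   # the figure reported in post 6
if [ "$observed_mb" -lt "$floor_mb" ]; then
    echo "invariant violated: only ${observed_mb} MB free"
fi
```

Since 220 MB is well below the 600 MB floor, either the minimum-free check isn't being applied on every write, or the free-space figure it consults is stale.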
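
Post 5 asks whether the free-space calculation lags behind deletions; that can be probed directly. A sketch: `MOUNT` defaults to a temp dir so it runs anywhere (on a local filesystem `df` should update immediately; run it against the share to look for a lag).

```shell
# Create a file, record df's free-space figure, delete the file, then poll
# to see how long the freed space takes to show up in df's output.
MOUNT="${MOUNT:-$(mktemp -d)}"
dd if=/dev/zero of="$MOUNT/victim" bs=1M count=50 status=none
before=$(df -k "$MOUNT" | awk 'NR==2 {print $4}')
rm -f "$MOUNT/victim"
for s in 1 2 3; do
    now=$(df -k "$MOUNT" | awk 'NR==2 {print $4}')
    echo "t=${s}s free=${now} KiB (was ${before} KiB)"
    sleep 1
done
```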