flex

Members
  • Posts: 5
  • Joined
  • Last visited


flex's Achievements

Noob (1/14)

0 Reputation

  1. That was actually the first thing I checked. After switching to ZFS and running fewer Docker containers, I ran iotop, and now the main culprit is the nginx: worker process, which has written/read a few GB since I started the command.
  2. I was checking my SSD SMART data and noticed that a brand-new drive I just installed had 303 TB written to it. I checked online and found that this was an old issue which should have been fixed in 6.9; I'm currently running 6.12.4. Looking at iotop, the only out-of-the-ordinary writes I see are from [btrfs-transaction]. I would like assistance in preventing the death of the SSD in a few more months at this rate. I've seen write spikes of up to 500 MB/s. I am running Plex, but transcoding is done in RAM. I thought it was excessive logging by nginx, so I moved that to a different cache drive, but there was no change. I can provide any logs/data as needed (see the iotop/smartctl sketch after this list).
  3. You assume that the drive is empty; it is not. Moreover, I would prefer not to put unnecessary wear on a main drive. Even though it is mirrored, the less stress it goes through the better. I'm not rejecting your comment, but it's a bit like having a leaky faucet with a bucket under it that you empty every so often: instead of fixing the leak, you just put a bigger container there so you have to empty it less often.
  4. So instead of filling up the docker.img, it's now going to fill up my SSD. That doesn't seem like a proper solution. What I would like to know is what kind of data is even being written here, where over 15 GB is written within a few minutes, and why you cannot change where the data is written to.
  5. Running the latest version of NPM (Nginx Proxy Manager). The container is writing to the docker.img location. It's writing and deleting as it fluctuates, at times filling the docker.img completely. It is not the logs writing to the img; whatever it's writing clears the space once the container stops. It is writing to /etc/hosts, as shown below (see the container sketch after this list):
     shfs          932G  644G  281G  70%  /data
     /dev/loop2     20G   16G  3.6G  82%  /etc/hosts
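
A minimal sketch of the checks described in posts 1 and 2, assuming a SATA drive at /dev/sdX (the device names and the exact SMART attribute names are assumptions; vendors report write totals differently):

   # Accumulated per-process I/O since iotop was started:
   #   -a  show accumulated totals rather than current bandwidth
   #   -o  only show processes that are actually doing I/O
   #   -P  aggregate threads into their parent process
   iotop -aoP

   # Total writes as reported by the drive itself. On many SATA SSDs this is
   # attribute 241 (Total_LBAs_Written); on NVMe drives it appears as
   # "Data Units Written" in the smartctl output.
   smartctl -A /dev/sdX       # SATA example; /dev/sdX is a placeholder
   smartctl -a /dev/nvme0n1   # NVMe example; device name is a placeholder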
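
For posts 4 and 5, a sketch of how one might narrow down what lands inside docker.img and remap it to a host path instead; the container name, host paths, ports, and image tag below are assumptions based on a stock Nginx Proxy Manager setup, so check the container's own documentation before relying on them:

   # Size of each container's writable layer (growth here lands inside docker.img):
   docker ps -s

   # Files created or changed in the container's writable layer
   # (container name is a placeholder):
   docker diff NginxProxyManager

   # Mounts as seen from inside the container; paths backed by /dev/loop2
   # live in docker.img rather than on a mapped share:
   docker exec NginxProxyManager df -h

   # Once the offending path is known, map it to a host share so the writes
   # bypass docker.img entirely (host paths are placeholders):
   docker run -d --name NginxProxyManager \
     -p 80:80 -p 81:81 -p 443:443 \
     -v /mnt/user/appdata/npm/data:/data \
     -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
     jc21/nginx-proxy-manager:latest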