About TexasUnraid

  1. Makes sense. Looks like I have disabled most of the docker logs at this point; got to give it a day to see what effect that has on writes. Then I will start disabling healthchecks and see what improvement that brings.
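For reference, here is a compose-style sketch of the options being discussed (service and image names are made up); on Unraid the same things can be passed to `docker run` as `--no-healthcheck` and `--log-opt` in a container's Extra Parameters:

```yaml
services:
  example:                # illustrative name
    image: some-image     # illustrative image
    healthcheck:
      disable: true       # same effect as docker run --no-healthcheck
    logging:
      driver: json-file
      options:
        max-size: "1m"    # cap each json.log file
        max-file: "1"     # keep only one rotated file
```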
  2. True, I have a few drives like that as well, but I also tend to notice that the 40-50k hour mark is where I have started to see issues in the past. Makes sense when you consider that ~42k hours roughly matches a 5 year warranty.
  3. lol, I spent an hour last night trying to get everything going again after adding some new drives and restarting the system. It kept refusing to add the new drives, and then one would fail and I would have to stop all of them and restart the process. That said, I should have the final plots for my old drives finished in the next hour. I have 3x 14TB drives sitting here, but I'm just not sure I want to put a bunch of hours on them needlessly, since these will be holding my important data once I get around to swapping drives in my main server.
  4. lol, didn't even think about the plots being part of the price.
  5. Yeah, I could see the benefits to it, but there must be a reason it has not been done, besides performance. If there is money to be made, someone will make it.
  6. I could see a benefit to that, although I am pretty sure the reason they don't is that the total density of the storage would not be as good due to more wasted space around the platters. It is also much harder to fit 5.25" bays in a server. Still, I would be interested in one if the price/TB was significantly better than 3.5" drives.
  7. Indeed it would be, but it was just a test to see what is possible. Interestingly, while syslog-ng does indeed work and log properly, the more interesting option for this situation is that by simply pointing the driver at a null IP address, the containers do not error out and they also do not seem to be storing logs anywhere. syslog-ng is shut down and there are no json.log files in the container folders. I restarted the docker notify logging and sure enough, no more log activity dealing with those log files, and a vastly reduced number of entries in total. The nice thi
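A sketch of the per-container logging config that should reproduce that behavior (compose-style; the address is a stand-in for whatever null IP was used). One likely explanation, assuming a UDP transport: docker's syslog driver never verifies UDP delivery, so the container starts and logs fine with nothing listening, whereas a `tcp://` address to a dead host would fail at container start:

```yaml
logging:
  driver: syslog
  options:
    # Stand-in null target: packets go nowhere, nothing hits disk,
    # and no json.log file is created for the container.
    syslog-address: "udp://0.0.0.0:514"
```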
  8. I should have all my old drives full later today; with 3 systems plotting I can do around 10TB/day, although I didn't bother getting the other 2 running until now. Now the debate is whether to use my new drives to hold plots for a bit until I actually put them to use, or leave them alone. I should have around 36TB by the end of the day. I could add in another ~60TB if I put my backup drives into use, but I don't think it is worth it. With the price crashing it's not even looking like I will get back the $100 I spent on stuff lol. Feel sorry for the guys that actually bought new
  9. lol, even in that kind of bulk, still $17/TB at the starting bid. Sad, you used to be able to get those drives regularly for $15/TB. I got a few for less than $14/TB last Black Friday.
  10. I tried the log-driver none option before, but it broke the containers for some reason. I could not understand how the buffer option worked. As best I could tell, it doesn't actually buffer the log into memory first and then write it out when it is full, which would be great. The way I understood it, it is simply a buffer for when the logging driver is overloaded so that it can catch back up. Am I wrong in that understanding? I considered syslog but didn't want it filling my Unraid log. I know very little about this stuff but I fou
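For what it's worth, that understanding matches the docs: `mode: non-blocking` inserts a ring buffer between the container's stdout and the logging driver, and when the buffer fills, messages are simply dropped. Messages that get through are still delivered to the driver as they arrive, so it is not a write-back cache that batches writes. A compose-style sketch of the relevant options:

```yaml
logging:
  driver: json-file
  options:
    mode: non-blocking       # container never blocks on a slow driver
    max-buffer-size: "4m"    # ring buffer size; overflow is discarded
```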
  11. Yeah, I thought that is what you were talking about. Sadly that won't help in this case. The easiest option is to move these log files to a /tmp/ folder and use a symlink, but I can't get it to work for some reason. If I move the log to a lower directory a symlink works fine, but as soon as I try to move it outside of the normal directory tree, it fails. Looking at the logs of the failure, it stacks the paths one after the other: /var/lib/docker/containers/112811851ab079fcf45f423e95abc48e466a34f8468c917c7bdd1686d22f6ab4/tmp/test/112811851ab079
  12. Please explain, are you talking about mapping a file inside the container? The container is not aware of this log; it is the docker system itself that maintains it. After some research it seems to be a log of all the output from the container. My thinking was I could redirect the log files to a tmp folder with a symlink, but docker complains with this: error can not open /var/lib/docker/containers/112811851ab079fcf45f423e95abc48e466a34f8468c917c7bdd1686d22f6ab4/tmp/test/112811851ab079fcf45f423e95abc48e466a34f8468c917c7bdd1686d22f6ab4-json.log: no such file or dir
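One possible explanation for that stacked path, just a guess: if the symlink was created with a relative target (no leading slash), it gets resolved relative to the container's own directory, producing exactly that doubled path. A quick shell demo with made-up stand-in directories:

```shell
# Stand-in for /var/lib/docker/containers/<id>/ (illustrative paths).
mkdir -p /tmp/demo/containers/abc123
cd /tmp/demo/containers/abc123

# Relative target: resolved relative to the symlink's own directory,
# so it "stacks" onto the container path.
ln -s tmp/test/abc123-json.log abc123-json.log
readlink -m abc123-json.log
# -> /tmp/demo/containers/abc123/tmp/test/abc123-json.log

# Absolute target avoids the stacking.
rm abc123-json.log
mkdir -p /tmp/test
touch /tmp/test/abc123-json.log
ln -s /tmp/test/abc123-json.log abc123-json.log
readlink -m abc123-json.log
# -> /tmp/test/abc123-json.log
```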
  13. I know, I am not saying the log files are Unraid's problem. I am simply asking if there is a universal way to deal with it.
  14. Odd, I think that one synced pretty quick for me, flax is the one that took a long time for me.
  15. I know in theory that you can use the direct path with the folder, but the log sat empty for hours when I did that. Switched to the /var folder and the log was populated as expected. Can't explain it, but that's what happened. Good option for situations where you can control the logs; I have no idea what these json.log files are though, they are part of the docker system and I could not find any command to change the path they are written to. You can set the log driver to none but for some reason it breaks the container. It is stored in the root folder of the container,
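For reference, the json-file driver has no per-container path option at all; the closest knobs are global ones in daemon.json: `data-root`, which relocates the entire docker tree (container dirs and their json.log files included), and default `log-opts` to cap the file sizes. A sketch, with an illustrative path:

```json
{
  "data-root": "/mnt/cache/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "1m",
    "max-file": "1"
  }
}
```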