doma_2345

Members
  • Posts: 48

Everything posted by doma_2345

  1. I didn't mention this in my original post: I have it set to '0', so my expectation is that it fills the cache drive right up. That doesn't seem to happen, though; it gets to 500GB, which is what I have set here, and then starts writing to the array, leaving 500GB free on the cache drive (I have a 1TB cache drive).
  2. OK, I didn't realise that, but assuming I am only writing files of 20GB max, why would it leave around 500GB free? I have my cache drive set to leave 250MB free, but at present it is leaving approximately 500GB free. I have found where to file a bug report, so I have done so.
  3. This was originally reported here and seemed to be fixed at some point, meaning the cache drive could be fully utilized. However, since I upgraded to 6.10.x the issue/bug seems to be occurring again. I have my shares set to leave 500GB free on all my array drives, but this means that when my 1TB cache drive reaches 500GB of utilization, any new copies start to write to the array. The old behaviour was to fill the cache drive up to 1TB and then start writing to the array. I can't see how this could have been implemented as a design feature, given that the behaviour keeps changing, and it also makes having a large cache drive completely redundant if only some of the available space is ever used. 35highfield-diagnostics-20220818-1538.zip
  4. This was originally reported here and seemed to be fixed at some point, meaning the cache drive could be fully utilized. However, since I upgraded to 6.10.x the issue/bug seems to be occurring again. I have my shares set to leave 500GB free on all my array drives, but this means that when my 1TB cache drive reaches 500GB of utilization, any new copies start to write to the array. The old behaviour was to fill the cache drive up to 1TB and then start writing to the array. I am not sure how to report this as a bug, and I can't see how this could have been implemented as a design feature, given that the behaviour keeps changing.
  5. I am looking into the usage / capacity of a pair of RAID 1 btrfs pool SSDs. I am trying to work out what is taking up the space, but I seem to be unable to get correct usage stats. The main GUI reports the following. df -h reports the following, which largely matches the GUI. du -h -d 1 reports this, which seems off, and the Shares page reports this, which is also different. There is nothing other than these shares on the drive. How do I get the true file sizes? I believe the total usage is correct but the individual share sizes are wrong; how do I get a proper overview of the sizes of the shares? (One way to compare the numbers is sketched after this list.)
  6. From looking into this, du should be reporting a larger amount than btrfs reports, not a smaller amount.
  7. Actually, I think I might have been reading it wrong; I am trying to work out the switch for limiting the file depth.
  8. Hi all, I wonder if someone could answer this. I have a separate pool set up at /mnt/systempool. The disk usage says the following. These are two 1TB SSDs in a RAID 1 configuration, and these are the system folders on this pool. I thought the amount of spare capacity I had on this pool was strange, and when investigating I found the following. Why is the GUI showing one thing and the CLI showing another? (See the pool capacity sketch after this list.)
  9. Although it is a pain and a long process, I can live with parity being rebuilt. At least I now know how to do it (Tools > New Config), and my OCD can rest easy.
  10. I know this has probably been asked before, but I couldn't find a suitable answer. This is how my disks currently look, and it's killing my OCD because the disk sizes aren't grouped together. I would like to group all the 12TB drives together as disks 5-8. Is the fix as simple as stopping the array and re-assigning the disks, or is that totally not the way to do it? Is there even a way to do it? These disks are not empty, so I don't want to reformat and lose the data; I need to avoid that scenario.
  11. Everyone please be aware of this bug. I lost my first two farmed Chia because of it; make sure your config file is correct. https://github.com/Chia-Network/chia-blockchain/issues/3141
  12. Great to see this in the test version; it's much easier than checking the logs. Also much better than the response times I was getting on a VM accessing the shares.
  13. Do you have nightly backups of your appdata folder using CA Backup? The Docker containers are stopped for this; I believe the default time is 3am.
  14. On my summary page it shows total plots as 214, but on the farming page it only shows 170 plots. I saw in an earlier post that there was a fix coming for multiple plot directories in 0.2; I have version 0.2.1 installed, so was this fix included? I also see on your wiki that the farming page shows time to win, but this seems to be missing on my farming page?
  15. I think this is the same issue reported here
  16. Apologies, I have just checked and the minimum free space set on my cache is '0', so why would I want 500GB left unused?
  17. And I know my cache drive used to fill up, because my VMs and Docker containers would crash when there was no space left.
  18. That's not obvious at all... the minimum free space set on my cache is 250MB, and yet it is leaving 500GB free now. When it was set to 100GB it was leaving 250MB free in 6.8.x. Why would anyone want to leave 500GB free on a 1TB cache drive? It makes the cache drive pointless.
  19. It only seems to have become an issue since I moved my minimum free space from 100GB per share to 500GB per share; before that, my cache drive used to fill up and then write to the array. I am not sure what has changed, but something has, and this no longer works.
  20. I had this issue recently. I am copying a series of 100GB files to my server and set the minimum free space on the share to 500GB (as I didn't want to completely fill my 12TB array disks). This meant it would copy a couple of files to my 1TB cache disk (600GB usable) and then start copying directly to the array. I don't believe this is the desired behavior, because with a 250GB cache disk the setting would be ignored entirely; it only becomes an issue when the cache disk is larger than the minimum free space. I also don't remember this being an issue in 6.8.x; it only seems to be an issue in 6.9.2. (A quick way to watch the cache free space during a copy is sketched after this list.)
  21. So I had this issue a while ago and now it has started again, with no difference in my Docker containers; in fact, nothing changed to make it stop happening and now start again. What is curious is that it happens overnight. I reboot the server in the morning and within 10-20 minutes it happens again; I reboot again and it is fine all day, then overnight it happens again and the cycle restarts. I didn't grab the logs before the first reboot, but find them attached from before the second. 35highfield-diagnostics-20210518-0943.zip EDIT: I am running 6.9.2
  22. So I rebooted again and disabled a Docker container that had an error in the first set of diagnostics and has been installed for about as long as I have been having Docker issues, which may have been caused by it all along, and it is now working. It does seem to be Docker containers causing this issue.
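
A rough sketch for the minimum-free-space posts above (items 1-4 and 16-20): a way to watch how much space is actually left on the cache pool while a large copy runs. It assumes the pool is mounted at /mnt/cache and that /mnt/disk1 is one of the array disks the share overflows to; adjust both paths to match your own pool and disk names.

    # refresh the free-space figures every 10 seconds during the copy
    # (path is an assumption; adjust to your pool's mount point)
    watch -n 10 df -h /mnt/cache

    # one-off comparison of the cache pool and an array disk
    df -h /mnt/cache /mnt/disk1

Comparing these figures with the share's 500GB floor and the pool's own minimum free space setting should show exactly at what point new writes start going to the array.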
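
For the share-size question in items 5-7, a sketch of the commands worth comparing, assuming the pool is mounted at /mnt/cache (substitute your pool's mount point). btrfs keeps its own extent-level accounting, so its numbers can differ from plain du when extents are shared (reflinks, snapshots) or compressed.

    # per-share sizes one level deep; -d 1 is the depth switch from item 7
    du -h -d 1 /mnt/cache

    # btrfs' own per-directory accounting (total, exclusive, shared)
    btrfs filesystem du -s /mnt/cache/*

    # overall space allocated vs. actually used on the pool
    btrfs filesystem df /mnt/cache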
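
For the /mnt/systempool question in item 8, a sketch of the btrfs views that usually explain the mismatch: with two 1TB devices in RAID 1, the raw capacity is about 2TB but only around 1TB of data fits, and different tools report one or the other (how btrfs answers a plain df has also changed between kernel versions, so the figures may not line up exactly).

    # per-device view: raw size, allocated chunks, unallocated space
    btrfs filesystem usage /mnt/systempool

    # per-profile view: Data/Metadata/System, each shown as RAID1
    btrfs filesystem df /mnt/systempool

    # plain df for comparison with the GUI figure
    df -h /mnt/systempool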