
Everything posted by doma_2345

  1. Although it's a pain and a long process, I can live with parity being rebuilt. At least I now know how to do it (Tools > New Config), and my OCD can rest easy.
  2. I know this has probably been asked before, but I couldn't find a suitable answer. This is how my disks look currently, and it's killing my OCD as the disk sizes aren't grouped together. I would like to group all the 12TB drives together as disks 5-8. Is the fix as simple as stopping the array and re-assigning the disks, or is this totally not the way to do it? Is there even a way to do it? These disks are not empty, so I don't want to reformat and lose the data; I need to avoid that scenario.
  3. Everyone please be aware of this bug. I lost my first two farmed chia because of it; make sure your config file is correct.
  4. Great to see this in the test version; much easier than checking the logs. Also much better than the response times I was getting on a VM accessing the shares.
  5. Do you have nightly backups of your appdata folder using CA Backup? The docker containers are stopped for this; I believe the default time is 3am.
  6. On my summary page it shows total plots as 214, but on the farming page it only shows 170 plots. I saw in an earlier post that a fix was coming for multiple plot directories in 0.2. I have version 0.2.1 installed; was this fix included? I also see on your wiki that the farming page shows time to win. This seems to be missing on my farming page?
  7. I think this is the same issue reported here.
  8. Apologies, I have just checked and the minimum free space set on my cache is '0', so why would I want 500GB left unused?
  9. And I know my cache drive used to fill up, as my VMs and docker containers would crash when there was no space left.
  10. That's not obvious at all... The minimum free space set on my cache is 250MB, and yet it is leaving 500GB free now. When it was set to 100GB it was leaving 250MB free in 6.8.x. Why would anyone want to leave 500GB free on a 1TB cache drive? It makes the cache drive pointless.
  11. It only seems to have become an issue when I moved my minimum free space from 100GB per share to 500GB per share. Before that, my cache drive used to fill up and then write to the array. I am not sure what has changed, but something has, and this no longer works.
  12. I had this issue recently. I am copying a series of 100GB files to my server and set the minimum free space on the share to 500GB (as I didn't want to completely fill my 12TB array disks). This meant it would copy a couple of files to my 1TB cache disk (600GB usable) and then start copying directly to the array. I don't believe this is the desired behavior: if you have a 250GB cache disk it ignores this setting entirely; it is only when the cache disk size is above the minimum free space that this becomes an issue. I also don't remember this being an issue in 6.8.x; it only seems to be an issue in 6.9.2.
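  The behavior described in that post can be illustrated with a small sketch. This is not Unraid's actual implementation, just a hypothetical model of the rule being discussed: a new write lands on the cache only while the cache's free space stays above the share's minimum-free-space setting; once the minimum exceeds what the cache can ever offer, every write diverts straight to the array.

  ```python
  # Hypothetical sketch of a "minimum free space" check gating cache
  # writes -- an illustration of the behavior described above, not
  # Unraid's real code. The function name and rule are assumptions.

  GB = 1024 ** 3

  def pick_target(cache_free_bytes: int, min_free_bytes: int) -> str:
      """Return where a new file would land under this simple rule."""
      # If the cache still has more free space than the configured
      # minimum, write to the cache; otherwise bypass it for the array.
      if cache_free_bytes > min_free_bytes:
          return "cache"
      return "array"

  # ~600 GB usable free on a 1 TB cache, 500 GB share minimum:
  # only ~100 GB of headroom before writes divert to the array.
  print(pick_target(600 * GB, 500 * GB))  # cache
  print(pick_target(450 * GB, 500 * GB))  # array

  # A 250 GB cache can never satisfy a 500 GB minimum, so under this
  # rule every write would go straight to the array.
  print(pick_target(250 * GB, 500 * GB))  # array
  ```

  Under this model, setting a per-share minimum larger than the cache device makes the cache unusable for that share, which matches the complaint in the posts above.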
  13. So I had this issue a while ago, and now it has started again with no change to my docker containers; in fact, nothing changed to make it stop happening and now start again. What is curious is that it happens overnight. I reboot the server in the morning, and within 10-20 minutes it happens again. I reboot again and it is fine all day, then overnight it happens again and the cycle restarts. I didn't grab the logs before the first reboot, but find them attached from before the second. EDIT: I am running 6.9.2
  14. So I rebooted again and disabled a docker container that had an error in the first set of diagnostics and has been installed for about as long as I have been having docker issues (which may have been the cause all along), and it is now working. It does seem to be docker containers causing this issue.
  15. This error still shows: Apr 15 16:45:12 35Highfield shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed. What do I do now?
  16. So I have rebooted and my shares are still missing.
  17. Good to know, but also FFS. If I reboot, how does that affect the parity rebuild? Will it resume from where it left off, or will it start again?
  18. I had an issue with one disk going offline with no SMART errors; it just became disabled. In trying to fix that and swapping cables around, I ended up with two disabled disks. I started a parity rebuild of those two disks; apparently there is no way to just add them back into the array once they are disabled. Then this afternoon my docker service crashed and all my shares disappeared. My docker service has been crashing fairly regularly since installing 6.9.0. Browsing the disks, the folders are there with data in them, but the shares are missing, and because the shares are missing, all access to these files has stopped. I don't want to reboot the server as this will interrupt the parity rebuild. Any help would be appreciated; diagnostics attached.
  19. I tried swapping cables to see if that was the issue and ended up with two disks disabled. Doh... I don't see any issue with the disks in the SMART test, so I am just going to rebuild and see if it happens again. Thanks for the help.
  20. I have this; I believe it is from before I rebooted.
  21. What do you mean "it's happening now"? It's still occurring. I just rebooted to see if it fixes the issue; it didn't.
  22. I will check all the connections when I get to the office, but the server doesn't move, so I don't think it could be a connection, and I have never had a cable just die before. The hard drives are connected in sets of four to a SAS backplane, so if it were a cable I would expect to lose four disks, not just one. Please find diagnostic data attached.
  23. Hi, I have had a disk die tonight. It's being emulated successfully, but when I look at the SMART data for the disk I am not sure what I am looking at; I am trying to work out the reason for it dying. Was I just unlucky, or is there something I am missing in the data?
  24. I have just set this up on a P4000. Can anyone explain to me why the current and reported hashrates are so different? Is this because of dev and pool fees?