macmanluke
Everything posted by macmanluke

  1. Finally searched this problem. Guess I'll implement this fix; it'd be nice if it was fixed in an update.
  2. Haha, just thought of that and came to post. I bet that's it; will see how I go.
  3. Does not seem to say anything useful; they just end without finishing. Had 3 finish yesterday, but it failed again last night (looks like right at the start of one). Interestingly, both nights it looks like it stopped just after 3am. When I come back in the morning the web browser window has also disconnected and needs a refresh.
  4. Nah, there was plenty of space: a basically empty 1TB SSD (and when it had failed it still had 550+GB free).
  5. So I set this up last night. Started it plotting and it was at around 170GB for 2 plots. This morning it had stopped plotting with no sign of the plots, and the drive still has a bunch of plot files on it (still on the SSD). How can I find out what went wrong? I guess I have to manually remove the files and can't resume (see the cleanup sketch after this post)? Edit: ran a single plot today and it seemed to complete OK. It was also after sync had completed; not sure if that made a difference. Will just keep them running one at a time for now.
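     A minimal cleanup sketch for the leftovers, assuming the plotter's temp files end in .tmp and sit in one directory; the path below is a made-up example, so check what the glob matches before deleting anything:

         # Remove leftover plot temp files after a failed plotting run.
         # PLOT_DIR is an assumed example path - point it at the SSD's plot dir.
         from pathlib import Path

         PLOT_DIR = Path("/mnt/cache/plots")

         for tmp in sorted(PLOT_DIR.glob("*.tmp")):
             print(f"removing {tmp.name} ({tmp.stat().st_size / 1e9:.1f} GB)")
             tmp.unlink()

     This just automates the manual removal; it assumes (as guessed above) that interrupted plots can't be resumed.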
  6. Yeah, might be worth a play. I believe it would be bound to VFIO (box checked in the VM?). It's an Nvidia 3060 and hashing pretty much the same as it was in a dedicated box (48MH/s). Currently using Quick Sync for Plex, but the Nvidia would be slightly better, I'd guess. Also no monitor connected, just an HDMI dummy. Funny enough, I searched for a docker before setting up the VM and came up with nothing, so I went the VM route.
  7. Hmm, wonder if I should switch over to this from a Windows VM. Windows is working flawlessly and resource usage is no issue, but it might save a few watts of power...
  8. How do you install over a previous pihole instance (safely)? Will it keep configs? (See the sketch after this post.)
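     For what it's worth, a rough sketch of the usual pattern with the Docker SDK for Python: configs survive a reinstall as long as the container's config directories (/etc/pihole and /etc/dnsmasq.d for pihole) stay mapped to persistent host paths. The host paths and container name below are assumptions; use whatever the existing template maps.

         # Sketch: pull a fresh pihole image and recreate the container.
         # Config persists because /etc/pihole and /etc/dnsmasq.d are
         # bind-mounted to the host; host paths and the container name
         # here are assumptions, and network/port settings are omitted.
         import docker

         client = docker.from_env()

         old = client.containers.get("pihole")  # assumed container name
         old.stop()
         old.remove()

         client.images.pull("pihole/pihole", tag="latest")
         client.containers.run(
             "pihole/pihole:latest",
             detach=True,
             name="pihole",
             volumes={
                 "/mnt/user/appdata/pihole": {"bind": "/etc/pihole", "mode": "rw"},
                 "/mnt/user/appdata/dnsmasq.d": {"bind": "/etc/dnsmasq.d", "mode": "rw"},
             },
         )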
  9. Yeah, that does seem easier (maybe slightly more risky redundancy-wise, but I'll just have to get CA Backup working properly first).
  10. Hello. Looking to upgrade shortly to 6.9 and do the SSD fix; the cache SSD pool has written 190TB in a year... In the instructions I noticed it says to use mover to transfer the cache to the array, but onto a btrfs drive. Is there any reason for this? My array is XFS. If it's required, is there any way of converting a drive without messing up the array/parity etc.? I currently have one drive that's empty (and excluded from shares globally), as I moved its contents off when I had a drive start failing recently (since replaced/rebuilt the array). Thanks!
  11. Just set this up and have a strange issue. If I use lancache I'm getting sub-2MB/s downloads in the Battle.net launcher, but if I bypass it I get my usual 15-20MB/s. Any suggestions? A quick test in Steam does not seem to have the same issue. Thanks! Edit: looks like it's recommended to change the slice size for Blizzard. Where are the config files for this docker? (See the sketch after this post.)
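     Following up on the slice-size question: in common lancache setups the slice size is an environment variable on the container rather than a config file you edit, and changing it generally invalidates the existing cache. Here is a sketch with the Docker SDK for Python; the CACHE_SLICE_SIZE variable, image tag, paths, and ports below are assumptions to verify against the lancache docs:

         # Sketch: recreate a lancache container with a larger slice size.
         # CACHE_SLICE_SIZE, the image tag, host paths, and ports are
         # assumptions - check the lancache documentation before relying
         # on any of them.
         import docker

         client = docker.from_env()

         client.containers.run(
             "lancachenet/monolithic:latest",
             detach=True,
             name="lancache",
             environment={"CACHE_SLICE_SIZE": "4m"},  # larger slices for Blizzard
             volumes={"/mnt/user/appdata/lancache": {"bind": "/data/cache", "mode": "rw"}},
             ports={"80/tcp": 80, "443/tcp": 443},
         )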
  12. Anyone having issues with the log file growing for this docker?
  13. I wonder if the software trying to get a WU very often is also causing issues: if I leave it running I get nothing, but if I stop it overnight I'll get a new WU straight away when it's restarted.
  14. Installed it yesterday and it was working fine all day. Today I noticed it was doing nothing. Searching the logs I see: … It's just repeating that. I got it to do something by restarting the docker, but it was soon doing nothing again. Is there something wrong (or is there actually just nothing for it to do)?
  15. I've had issues since the last update: crashing/growing to 4GB! Had to do a force upgrade to get it to work again; we'll see how it goes... Edit: nope, it crashes almost straight after booting.
  16. Just noticed I have the same issue. Tried -i br0 but it's still pegging 1 core.
  17. Just noticed I have the same issue. Trying -i br0 now, see how I go; wondered why I had so much CPU usage with nothing much happening. Edit: still pegging one core...
  18. Although when I added it to my array, it's now clearing it again?
  19. My 3TB just finished and succeeded!
  20. Just a 3TB, but it errored the last 2 tries.
  21. That worked for me (as a test). Will run the whole drive now.
  22. I just got the same error preclearing a Seagate ST3000DM001.