Everything posted by Helmonder

  1. Question… when I win some chia, how do I move it to another wallet (OKEx, for example) so I can exchange it for BTC? Sent from my iPhone using Tapatalk
  2. Yeah… but tomorrow I expect you will not be seeing the new alerts… it's now stuck on today. Sent from my iPhone using Tapatalk
  3. It looks like the "Alerts" page is stuck on a date a few days back. The log is still being monitored, though. I think I saw this a couple of days ago as well; when I restart the docker I think it will start with today again (and then get stuck on that). I do not want to try that though, as it will stop my plotting. Stopping and starting the monitoring job itself does not help.
  4. Just saw the below in the logs. It says "added coins" but nothing is visible in the dashboard. The log text does not mean chia was won, hopefully? Because if so, something is wrong. Sent from my iPhone using Tapatalk
  5. Hi, the status does not update itself, even after stopping and starting the docker. I will try the docker console command as soon as the issue arises again! Sent from my iPhone using Tapatalk
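For what it's worth, moving XCH out of the local wallet can be done from the Chia CLI (a sketch assuming the stock `chia` CLI is on the PATH; the `xch1...` address is a placeholder for the deposit address the exchange gives you):

```shell
# Show your own receive address (useful to confirm the wallet is synced and responding)
chia wallet get_address

# Send 0.1 XCH to the exchange's deposit address (placeholder shown here)
chia wallet send -a 0.1 -t xch1...
```

Double-check the deposit address and that the exchange actually supports XCH deposits before sending.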
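One way to check whether the farmer actually found a proof (i.e. won something) is to search the Chia debug log for non-zero proof counts (a sketch; the log path is the upstream mainnet default and may be mapped elsewhere inside a docker image):

```shell
# Lines like "... Found 0 proofs." are normal; a non-zero count means a win.
# Default mainnet log location; adjust for your container's mapping.
grep "proofs" ~/.chia/mainnet/log/debug.log | grep -v "Found 0 proofs"
```

If that produces no output, no proofs have been found in the current log, and the "added coins" message is likely just wallet bookkeeping.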
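Getting a console inside the container is done with `docker exec` (a sketch; the container name `machinaris` is an assumption here — list your containers first to find the real name):

```shell
docker ps --format '{{.Names}}'        # list running containers to find the right name
docker exec -it machinaris /bin/bash   # open a shell inside the container (name assumed)
docker restart machinaris              # or restart the container outright
```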
  6. Think I found out why madmax was not working... Will try again -after- I have read the instructions :-) (EDIT: in the meantime MADMAX is now plotting.) Still think something is going on with the plotting screen, though... After starting chia plotting again I have now: stopped plotting, killed the running plot. The files are still in the temp folder and the screen still shows the old plot:
  7. Hi, I wanted to try the MADMAX plotting, so I killed my current plots (only just started), removed all temp files, changed the config, and am trying to start plotting again. However, the system does not start plotting and the plotting screen shows some confusing information... It says it is plotting, but Tower is stopped (I only have 1 system) and the plots that I previously killed are still in the list: Any idea? I switched back to the chia plotter; that works fine.
  8. Thanks for your answer, but I think you are not understanding. My question is that it is unclear to me -how- to determine whether this is a specific docker or the unraid core. If someone can give me a push in the right direction, I will be on my way again. Nessus is a vulnerability scanner and it flagged a couple of high-level vulnerabilities on my unraid server, so the question is not about Nessus, but about the system.
  9. I just installed the Nessus docker and let it run. The following finding I cannot identify. It is also hard for me to tell whether this is something in the unraid distribution or within a plugin. Does anyone have any idea how I could go about finding that out?
  10. Rebuild and repair done, all good! Sent from my iPhone using Tapatalk
  11. I know... the disk got thrown off the array, though... Now doing a rebuild onto itself and simultaneously also doing the repair...
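A common way to narrow down whether a flagged service belongs to a docker or to the host is to map the listening port to its owning process (a sketch; port 8080 below is just an example — substitute the port Nessus reported):

```shell
# Show which process is listening on the flagged port on the Unraid host
ss -tlnp | grep ':8080'

# If the owner is docker-proxy, find which container publishes that port
docker ps --format '{{.Names}}\t{{.Ports}}' | grep '8080'
```

If `ss` shows a host process (e.g. nginx, emhttpd) rather than docker-proxy, the finding concerns the unraid core rather than a container.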
  12. Any idea?
Jun 10 18:41:43 Tower kernel: XFS (md8): Corruption warning: Metadata has LSN (1:2006775) ahead of current LSN (1:2006631). Please unmount and run xfs_repair (>= v4.3) to resolve.
Jun 10 18:41:43 Tower kernel: XFS (md8): Metadata corruption detected at xfs_agi_verify+0x63/0x12e [xfs], xfs_agi block 0x2
Jun 10 18:41:43 Tower kernel: XFS (md8): Unmount and run xfs_repair
Jun 10 18:41:43 Tower kernel: XFS (md8): First 128 bytes of corrupted metadata buffer:
Jun 10 18:41:43 Tower kernel: XFS (md8): metadata I/O error in "xfs_read_agi+0x7c/0xc8 [xfs]" at daddr 0x2 len 1 error 117
Jun 10 18:41:43 Tower kernel: XFS (md8): xfs_imap_lookup: xfs_ialloc_read_agi() returned error -117, agno 0
Jun 10 18:41:43 Tower kernel: XFS (md8): Failed to read root inode 0x60, error 117
Jun 10 18:41:43 Tower root: mount: /mnt/disk8: mount(2) system call failed: Structure needs cleaning.
I am running a rebuild on this drive at the moment... Hope that will fix it? SMART reports were fine, so I am rebuilding onto the same disk (after doing a complete cable reseat for all my disks, just to be sure). tower-diagnostics-20210610-1947.zip
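The kernel messages above explicitly ask for an unmount followed by xfs_repair. On Unraid this is typically run against the md device with the array in maintenance mode (a sketch; `/dev/md8` is taken from the log above — verify the device first, and always do a dry run before repairing):

```shell
# Dry run: report problems without modifying anything on disk
xfs_repair -n /dev/md8

# If the dry run output looks sane, run the actual repair
xfs_repair /dev/md8

# If it refuses because of a dirty log, -L zeroes the log
# (last resort: recent metadata changes may be lost)
# xfs_repair -L /dev/md8
```

Note that a parity rebuild alone will not fix filesystem-level corruption; it only rewrites the same (corrupt) data, so the xfs_repair step is still needed.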
  13. https://tweakers.net/aanbod/2736776/supermicro-x9scm-f-met-1230-v2-33ghz-en-16gb-ecc-memory.html Sent from my iPhone using Tapatalk
  14. Think I found it already, you are a life saver… I connected the cage fan directly to the motherboard… It appears I can connect the fan to the cage and the cage to the motherboard. With the way I connected it right now, the cage does not see the fan and alerts… It also explains why in reality everything worked fine :-) Will check for real this afternoon, but quite positive this is the issue! Sent from my iPhone using Tapatalk
  15. CSE-M35T-1B Sent from my iPhone using Tapatalk
  16. Ooh… that is a good one… I was in contact with Supermicro and they confirmed that the only beep like this is for overheating, and that could not be the case here… I did not think of the cage itself as a potential cause. It has been in the server for a long time (though unused and unplugged for years). I will check that this weekend!
  17. That is not likely, since the fan on the cage is powered by the motherboard, and that fan is also plugged in when the cage is not, and then everything works.
  18. I read that… "System overheat condition"... but as said, that seems very unlikely if it happens immediately after a power-up when one extra drive cage is connected… Unplug it, power up, no beep; power down, plug it in, power up, beep. That cannot be an overheat condition... it -does- seem to be power related, though… Sent from my iPhone using Tapatalk
  19. Hi, this weekend I added another M1015 to my system to add another 5 disks. The M1015 was flashed fine, and when I add one SSD to that adapter, the adapter is recognised fine in unraid and that SSD is now part of my cache pool (this confirms that adding the card works). Now, the moment I give my drive cage power and start up my system, my server starts beeping continuously (and immediately at boot). This long beep normally means something like an overheat, but I do not think that is likely since it happens immediately at startup. My setup:
Supermicro X11SSM-F, Version 1.01
Intel® Xeon® CPU E3-1230 v6 @ 3.50GHz
32 GiB DDR Single-bit ECC
10Gb Mellanox SFP+ network
10 * WD RED 8/10TB
1 * 1TB SSD (NVMe, cache)
2 * 250 GB SSD (extra cache pool (chia))
4 * 8TB Seagate Archive
The PSU is 750W single rail. Just to be sure, I added the new drive cage with a new power connector directly from the PSU. My first thought would be a wattage shortage (considering it happens directly at bootup), but that does not seem likely with 750 watts. I sat through the beep for a few minutes, actually starting up the system, and everything did work; drives were recognised. IPMI also does not show errors (there is one error about a fan that is running too low, but that has always been there). I am tempted to just disable the speaker, but of course that is a bad idea. What could this be?? Diagnostics are attached just to be sure.
  20. Solved it… unrelated issue… cache drive was full. Sent from my iPhone using Tapatalk
  21. Just had to reboot unraid… After the reboot, farming is unavailable and there is a connection error… It has been like that for an hour; can that still be a startup thing that will solve itself, or is something else wrong? Sent from my iPhone using Tapatalk
  22. Hi! So far I have created 7 plots and that is continuing nicely. I am wondering, however: how can I actually determine that the -farming- is working? I am aware that it can take months and months before there could be any payout, but I do not see any indication that there is something going on that is "farming" (besides the "farming active" on the main page). How can I see that I am at least -trying- to create a profit? Sent from my iPhone using Tapatalk
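The Chia CLI can confirm the farmer is actually up and answering challenges (a sketch assuming the stock `chia` CLI; in a docker setup you would first open a console inside the container, and the exact path to the CLI may differ per image):

```shell
chia farm summary     # should report "Farming status: Farming", plot count, total network space
chia farm challenges  # lists recent challenges your harvester has responded to
```

A steadily growing list of challenges is the sign that your plots are being checked, i.e. you are "trying" even when nothing is won.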
  23. Soon (tm) is the answer you will get out of the forum ;-)
  24. It's still just sitting there… The cache drive has more than enough free space, 100GB+. I removed the plots and restarted the process with 1 plot at a time. I had been fiddling with the type of cache drive (Only, Preferred); I can imagine that created some conflicting data. Since this morning it has been chugging along at its first plot and is now sitting at phase 3:4 but still running. So I am guessing all is fine now!