Helmonder

Everything posted by Helmonder

  1. Could have something to do with your allocation method combined with split level… Check out those settings and play around with them.. Set split level to “split any directory as required”, that might be it.. Sent from my iPhone using Tapatalk
  2. That must be it, mine lock fine. Success !
  3. What do you mean by “mind of thr sfp+ dacs will lock in“ ? I think you mean “none” ? Did you put them in the right way up ? Mine are locking great.
  4. No errors found !
  5. Okay... but it's not in there, and the array is functioning fine. I do have an eth0 and it also survives reboots... What would be the best course of action to get rid of this message ? Delete the file ? I do not think there is a way to "recreate" it ? Maybe I am thinking about this wrong... Is this a copy of the file I created myself ? And is FCP just checking every file ? The naming convention is how I would name a file before editing it for some reason.. I cannot remember doing it, but it could be true..
  6. Thanks for this ! My 10G is now eth2 so I should switch that with the eth0 1G then... Correct ?
  7. Installed the container but the userid/password settings do not seem to work ? I can set them but the WEBGUI will not let me log on.. The edit page states that you can leave them empty if you do not want userid/pass but the edit page will not allow them to be empty..
  8. I have a bond0 configured that consists of 3 network interfaces in an active-backup setup: eth0 and eth1 are the regular NICs on my motherboard (1G), eth2 is a Mellanox SFP+ 10G connection to my switch. Since this is an active-backup setup only one should be active; how is the priority of the NICs decided within a bond ? Does the system do this on its own ? Is active-backup the best bonding type I can choose ? Since my NICs are unbalanced (one being 10G and two 1G) I think that mode 0 would not be beneficial, since the 1G ones would be used in some cases... Would mode 5 (balance-tlb) be something I could use ? Also: can unavailability of a NIC be notified in some way ? Otherwise all NICs within the bond could gradually and unnoticed die off until the last one dies and the connection is gone..
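For reference: in an active-backup bond the kernel reports which slave is currently carrying traffic in /proc/net/bonding/bond0, and the bonding driver's `primary` option can pin the preferred slave (here the 10G eth2) so the bond fails back to it whenever it comes up. A minimal sketch of reading that report; the sample text below is illustrative, mirroring the kernel's status format with the interface names from the post:

```python
import re

def active_slave(bond_status: str) -> str:
    """Return the currently active slave named in a bond status report."""
    m = re.search(r"Currently Active Slave: (\S+)", bond_status)
    return m.group(1) if m else ""

# Illustrative sample mirroring the layout of /proc/net/bonding/bond0
# for an active-backup bond with the 10G NIC (eth2) set as primary.
sample = """\
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth2 (primary_reselect always)
Currently Active Slave: eth2
MII Status: up
"""

print(active_slave(sample))  # -> eth2
```

On a live box you would pass `open("/proc/net/bonding/bond0").read()` instead of the sample; slave link failures are also logged by the kernel, so they show up in dmesg/syslog.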
  9. The file old.network-rules.cfg is flagged as corrupted, but it does not look corrupted to me.. I actually replaced my flash drive after the notification (had a new one lying around and I had not yet taken the moment to do it); after replacement a new scan gives the same error.. So it is not corruption of the drive but something that is triggered by the content:

# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:62", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3 "
# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:63", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1 "
# PCI device 0x15b3:0x6750 (mlx4_core)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:02:c9:52:e9:60", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2 "
# PCI device 0x15b3:0x6750 (mlx4_core)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:02:c9:52:e9:61", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4 "

For the moment I will not worry about the message. Unless someone sees something wrong here ?
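One detail that stands out in those rules: the NAME values end with a trailing space (NAME="eth3 "), which a validity checker could well flag. A small hypothetical helper (not part of Fix Common Problems or Unraid) to scan a rules file for such values:

```python
import re

def suspicious_names(rules_text: str) -> list[str]:
    """Return NAME= values that contain leading or trailing whitespace."""
    names = re.findall(r'NAME="([^"]*)"', rules_text)
    return [n for n in names if n != n.strip()]

# One of the rule lines from the post, with its trailing space intact.
sample = 'SUBSYSTEM=="net", ACTION=="add", KERNEL=="eth*", NAME="eth3 "'
print(suspicious_names(sample))  # -> ['eth3 ']
```

If the scan comes back non-empty, stripping the stray whitespace (or regenerating the file) may be all the checker is asking for.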
  10. Addition: The SFP+ 10G card is now also in.. I should be set for five years, at which point I might actually stop... I do notice that I am using more and more streaming services, so the initial "goal" for my unraid server is kind of dying off.. I like taking care of it though.. Like a nice pet 🙂
  11. Card is in and working ! I moved the M2 to a PCIe expansion card and put the SFP+ in the slot.. Done !
  12. Yeah it was really super weird... Because this is the unraid forum, a bit more context for the unraid people: I had 9 8TB drives in the array allocated only to Chia plots. I added a new drive to that share, another 8TB one. Plotting refused to work.. I tried messing with the minimum free space and allocation method to make sure those were not working out the wrong way (although I have been working with unraid for a long time and I do not think I would have made an error there). What worked in the end was removing all existing Chia drives from the Chia share and keeping only the new drive; that works... That is not an issue either, since the plots are still perfectly readable and the shares are not written to while farming..
  13. This is the output of the log when it is plotting that first plot:

Multi-threaded pipelined Chia k32 plotter - efa5b2a
(Sponsored by Flexpool.io - Check them out if you're looking for a secure and scalable Chia pool)
Network Port: 8444 [chia]
Final Directory: /plots/
Number of Plots: 1
Crafting plot 1 out of 1 (2022/02/08 18:24:20)
Process ID: 185
Number of Threads: 4
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Pool Public Key: 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27
Farmer Public Key: a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576
Working Directory: /plotting/
Working Directory 2: /plotting/
Plot Name: plot-k32-2022-02-08-18-24-8c67fb12aaac22d05d1a2958c6c0eafc261b6c8a3c4d814fb59c444bba4938a3
  14. Nope… it stopped again… one plot got made and transferred to /plots (proving that rights are fine and nothing is read-only). Status is as described above.
  15. It would… but that is why I checked writing on the console within the docker… that worked, so the share is not read-only or full.. I have reset security (change permissions) on the share just to be sure and will let you know. It would be weird if that works though; a security issue should also have prevented the first plot from being saved..
  16. Hiya ! I just joined the discord but sharing the log seems easier this way ? The most recent log (2022-02-07T17_40_10.437180+01_00.plot.log) might point to the issue already; it is only 1KB and contains one row:

Failed to write to finaldir directory: '/plots/'

/plots is mapped to /mnt/user/Chia Plots/; this is a user share without cache pool that has 6.20TB free. /plots is writeable: I opened the console for the docker and was able to write to the share. The file underneath is "plotman.log", this is 43,959 KB and is attached. The last few lines:

Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_38_10.286204+01_00.plot.log
Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_38_30.311255+01_00.plot.log
Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_38_50.335495+01_00.plot.log
Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_39_10.361398+01_00.plot.log
Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_39_30.386354+01_00.plot.log
Starting plot job: chia_plot -n 1 -k 32 -r 4 -u 256 -x 8444 -t /plotting/ -d /plots/ -v 256 -K 1 -f a5289fd7f3eb289bae43f58a2dec652369acaf004eb6179dc6fd77053e37d5be9713f3ee3fda01842eaed5c9184ea576 -p 98533421fb01cd3e93bb10c8a3c8a7d1e78d33d577f2a537def7292f89f9918cff12a0fe3ed818b71bfa2821513e9b27 ; logging to /root/.chia/plotman/logs/2022-02-07T17_39_50.411756+01_00.plot.log

This is how Machinaris looks on the plotting page; stopping and starting the container does not work, it remains in this state. After restarting the container after 15 minutes or so the status changes to "stopped" and I am able to restart the plotter (but after one plot it will fail again). My CPU is running at 50% load (E-2146G CPU) and my memory is at 64% utilisation (64GB). Any idea ? plotman.zip
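To check the "Failed to write to finaldir" condition independently of the plotter, a tiny write probe can be run against the mapped directory. A sketch (my own helper, not part of Machinaris or plotman); on the server you would call it on /plots/ from a console inside the container:

```python
import os
import tempfile

def can_write(directory: str) -> bool:
    """True if a file can be created (and removed) in `directory`."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False

# Self-contained demo: probe the system temp dir instead of /plots/.
print(can_write(tempfile.gettempdir()))  # -> True
```

Since the first plot did transfer successfully, a True here would support the idea that the failure is intermittent (e.g. the share filling up or the plotter process hanging) rather than a plain permissions problem.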
  17. Unfortunately the same effect: the first started plot has finished, Machinaris shows the plotter as “running” but it is not… Pretty sure that if I do the start/stop cycle I can get the plotter started again, but again only for one plot. What can I send log-wise ? Or is there something else I can do ?
  18. Was afraid of that… luckily I still have an x1 port available and a card for that I can put the M2 on :-) It will be a bit slower but good ‘nuff..
  19. I was trying to find info on this… the PCIe lanes are shared… Does that mean one will really be disabled, or will they share resources ?
  20. That solved it ! I set all containers to “latest”, then did a check on updates, installed those, started the containers, let them start up for 15 minutes and now I could start the plotter. I will keep an eye on it to see if the system keeps plotting !
  21. My plotting does not seem to work consistently any more… I start the plotter, it makes a plot, does not start a new one; the plotter is still running though… Stopping and restarting the plotter also does not help… I am running “test”.. If I switch to “latest” that gives an internal server error: “Internal Server Error. The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application”
  22. That is great news ! Did you have to cross flash it in some way ? Or did it work right away ?
  23. So my build was getting a bit long in the tooth. I used to have:
- Supermicro X11SSM-F
- Xeon E3 1230 v6
- 32GB ECC
- 3 * Dell PERC H200 PCI-e
- 4 * 5in3: Supermicro CSE-M35T-1B, 114 TB spread over 13 disks
- Cache: 1 * 960 GB M2 SSD in XFS
- Chia pool: 2 * 256GB SSD
- Parity: 2 * 10TB
- Unraid PRO key
- Network: 10G + 1G + 1G (in bond)
Delivery of my new motherboard took -8 months- (thanks Covid), but now I am rocking a new setup that keeps my CPU at < 50% again (with all the Chia this had crept up to 100% utilisation and full memory, also triggering OOM errors). New setup:
- SuperMicro X11SCA (lost the IPMI unfortunately, so I hung an extra monitor on the wall for when it's needed; works too)
- Intel Xeon E-2146G CPU (bought second hand last year, after finding out it would not work in my old motherboard)
- Dark Rock Pro 4 CPU cooler (instead of the stock one)
- 64 GB DDR4 single-bit ECC (can expand to 128GB if I want)
- Network is now bottlenecked at a 1G connection because my Mellanox card needs an x8 slot that I no longer have; I ordered a new x4 based one though, so that should be fine when it arrives in a few weeks
- I now have 3 cache pools:
  - "Cache", 960GB NVMe M2: this remains my apps drive
  - "Chia", 1TB SATA SSD: this used to be my Chia plotting drive; not using it now, saving it for a rainy day
  - "Temp", 2TB NVMe M2: this is now my cache drive for files and also my Chia plotting drive
- Flashdrive: I did not change that; I will need to swap that out as well before it fails at some point.
I change out my drives when they get past 3 years, and when I add one more 16TB Toshiba I will have changed out all drives. My data lives on 5 16TB drives (so about 80TB). My actual array is 176 TB; everything above that 80TB is Chia storage. I actually used all my previously exchanged SATA drives (older than 3 years but they still have life left in them) for Chia plots. I am solo plotting and this has already earned me about EUR 500,=.
The system is full now though: there is no physical room for more drives, and no room left in the unraid license either.
  24. I know.. I have the same thing. Although not peer to peer: I actually have a network switch with two SFP+ cages, one connected to my server, the other one to my desktop.. Indeed great transfer speeds. However, I was using a Mellanox x8 card (two SFP+ cages but only using one); now that I have switched to a new motherboard I do not have a spare x8 slot available, but I do have an x4 slot, which should be fine since a single-cage SFP+ is good enough for me. There are however numerous different kinds of Mellanox cards available on the second hand market and I was looking for one that is already known to work with Unraid.. I now purchased this one on eBay: Mellanox MCX311A-XCAT ConnectX-3 10GbE Single-Port SFP+ PCIe3.0 x4 8GT/s. Hope it will work !
  25. Who can advise an x4 SFP+ Mellanox card that will work in my unraid box ?