jollymonsa

Everything posted by jollymonsa

  1. SPX Labs is without a doubt one of the top sites that comes up in organic search when I am looking for server know-how. I have referred what feels like a hundred times to his work when someone gets a Dell 12th-generation server and then says "it's loud"; yeah, he has the fix for that. https://www.spxlabs.com/blog/2019/3/16/silence-your-dell-poweredge-server < in case YOU need that fix. I didn't know he had a YouTube channel until fairly recently, but I have subbed and enjoy it a lot. Keep up the good work man!
  2. Can I get a build link for a 6.8.3-compatible .txz, @Squid? Sorry for such a dumb request, but I have to be able to use a PMC8003 card for a few months, and the patches for 6.9+ (even the beta) just do not allow the NetApp DS4246 to work with that card. On 6.8.3, however, it is working great.
  3. Or do this to keep from needing to restart the mining operation.
  4. This can be addressed by running the following in a User Scripts job: chown nobody:users /mnt/user/XXX-YOUR-SHARE-HERE-XXX/plot*.plot && chmod 755 /mnt/user/XXX-YOUR-SHARE-HERE-XXX/plot*.plot Note the part about substituting your own user share name. Then set the cron time to custom and run it every 2 minutes or so with the following schedule: */2 * * * * and then you are good to go.
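      For reference, the whole User Scripts job can look something like this (just a sketch; the share name "chia" below is an example, swap in your own):
      #!/bin/bash
      # fix ownership and permissions on finished plot files so the farmer can read them
      SHARE="chia"   # example share name -- use your own
      chown nobody:users /mnt/user/${SHARE}/plot*.plot
      chmod 755 /mnt/user/${SHARE}/plot*.plot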
  5. Most of what people are running into so far are Docker-related hardships. Docker is a different mindset for how to run, map, and share resources, but it is among the best once you get it up and gain some familiarity with it. I think you should rename the shares to not have spaces, periods, or other non-alphanumeric characters and give it another try. The Docker container will handle the mapping, so you only need the friendly "in container" path specified in your plotman.yaml. Does that help? If you look about 10 questions back, Guy went over another person's mapping issues as well. Let me know if you have other pain points and I will try to make a video covering that setup.
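      To illustrate the mapping idea, here is roughly what it boils down to (a sketch only; the share names and image tag are examples, and in Unraid you would set these in the container template rather than on the command line):
      # host share (no spaces or odd characters) -> friendly in-container path
      docker run -d --name machinaris \
        -v /mnt/user/chia_plotting:/plotting \
        -v /mnt/user/chia_plots:/plots \
        ghcr.io/guydavis/machinaris
      Inside the container, plotman.yaml only ever needs to reference /plotting and /plots.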
  6. The only change you should make to the Machinaris container is entering the mode: plotter variable and hitting Apply. That will keep all the settings you had working prior. Did you have the settings you posted above working before? You shouldn't need to do anything other than verify that your settings (farmer pk, pool pk, mnemonic.txt) are all exactly as they have been while you plotted previously. If they are not, return them to the settings that are associated with the Hpool binding.
  7. Here is the relevant timestamp; jump to the part about setting plotter mode in Machinaris for Unraid so you do not double farm. I have a VLAN and a trash laptop I put it all together on to run the client. I have checked the firewall and seen no oddities. https://youtu.be/Udv7ISyQn8s?t=1030 This is all you need to set; then run their miner and you are good to go. Keep using the same keys/mnemonic setup you have been using in Machinaris; it binds to your keys, not the other way around.
  8. I just did a video walkthrough on setting up everything in Unraid and Machinaris. https://www.youtube.com/embed/Udv7ISyQn8s Side note: you have got to run the Chiapos inclusion; it was a fresh pull for the video and is definitely yielding improvements in speed. DANG! Now I have to sacrifice phase 1 plots and update the rest of the rigs. Worth it.
  9. I am plotting in parallel (via terminal exec, not the Docker-launched terminal) and started 6 with the line: docker exec -it chia venv/bin/chia plots create -b 4300 -r 2 -t /plotting/plot2 -d /plots I have a 2TB SSD used only for plotting, 32GB of RAM, and the 2x 2630L v2 will be upgraded as soon as these 5 plots (hopefully) finish. Not sure why the 1 died, but I will enable a better log level once I move it all.
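      In case it helps anyone, staggering the parallel starts by hand looks roughly like this (a sketch of what I am doing; the container name "chia", the temp subdirectories, and the 30-minute stagger are from my setup or guesses, not gospel):
      # launch several plot jobs detached, staggered so phase 1 does not hit the temp SSD all at once
      for i in 1 2 3; do
        docker exec -d chia venv/bin/chia plots create -b 4300 -r 2 -t /plotting/plot${i} -d /plots
        sleep 1800   # 30 minute stagger between starts
      done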
  10. HOLY COW! Thanks for the super clear, concise message about everything getting written to the array at the very end! I am sure it has been written up, and I have probably even read it, but the way you said it finally registered. The CPU is also running a Docker container for ZoneMinder, and that can be intensive, but I think the real problem is that I am running 2x 2630L v2 processors (to reduce heat) instead of the 2x 2650 v2 processors I have here. Once these 5 remaining plots get written to the array, I will be adding 64GB of RAM and putting the faster processors back in as well. I am going to have to move the SC846 back to the garage, however; this room is like a sauna.
  11. I am at 16+ hours on the 5 remaining active plots (1 died?), and they are at: "Starting phase 2/4: Backpropagation into tmp files... Tue May 11 16:19:22 2021" I just have no insight into what is happening here, but for comparison, this is how long phase 1 took: "Time for phase 1 = 60548.690 seconds." Something is wrong, but I have no idea what. The official Chia docs give pretty over-the-top things to look for, including restarting Chia to change the log level, which would tank the existing plots, from my understanding.
  12. NOT TO BLAME PIXEL, just super confusing directions for a novice, including from official Chia sources. I have had a lot of trouble getting plots to actually show up, super frustrating! I am abandoning this template and just setting up an F'n VM and mining off that. The time curve of production is working AGAINST anyone not able to get this working ASAP, rather hard. I have completed at least one table 4 sort and then had that harvester just drop out. When I do a farm summary, I am greeted with 0 plots, but I think I need the GUI to start with, as I have no idea if things are still operating. I can't spend more time trying to figure out if it's the Docker, my config, or whatever when I should have just set up a dedicated VM and mined off that. I think the best bet is to leave this running and just see what I can cobble together for a dedicated machine.
  13. My issues may also be your issues: Chia is currently out of sync, and some sort of hotpatch that will address this is in the works as I type. My issues may be related to that.
  14. Something is not right, and I am unsure of what. This has been running for 20 hours already. Initiated with cmd: venv/bin/chia plots create -b 5000 -r 2 -n 6 -t /plotting/plot1 -d /plots
      # venv/bin/chia farm summary
      Connection error. Check if farmer is running at 8559
      Farming status: Syncing
      Total chia farmed: 0.0
      User transaction fees: 0.0
      Block rewards: 0.0
      Last height farmed: 0
      Plot count: 0
      Total size of plots: 0.000 GiB
      Estimated network space: 2613.697 PiB
      Expected time to win: Unknown
      Note: log into your key using 'chia wallet show' to see rewards for each key
      # venv/bin/chia plots check
      2021-05-09T05:03:23.573 chia.plotting.check_plots : INFO Loading plots in config.yaml using plot_tools loading code
      2021-05-09T05:03:25.399 chia.plotting.plot_tools : INFO Searching directories ['/plots']
      2021-05-09T05:03:25.400 chia.plotting.plot_tools : INFO Loaded a total of 0 plots of size 0.0 TiB, in 0.06633210182189941 seconds
      2021-05-09T05:03:25.401 chia.plotting.check_plots : INFO Summary
      2021-05-09T05:03:25.401 chia.plotting.check_plots : INFO Found 0 valid plots, total size 0.00000 TiB
      # venv/bin/chia plots check
      2021-05-09T15:54:52.213 chia.plotting.check_plots : INFO Loading plots in config.yaml using plot_tools loading code
      2021-05-09T15:54:53.957 chia.plotting.plot_tools : INFO Searching directories ['/plots']
      2021-05-09T15:54:54.222 chia.plotting.plot_tools : INFO Found plot /plots/plot-k32-2021-05-08-22-14-6073b11d5b1d4d655748135c36076ba121ae6ed1d81f38e863215d67c527fd8e.plot of size 32
      2021-05-09T15:54:54.222 chia.plotting.plot_tools : INFO Loaded a total of 1 plots of size 0.09893322847165109 TiB, in 0.33187055587768555 seconds
      2021-05-09T15:54:54.223 chia.plotting.check_plots : INFO Starting to test each plot with 30 challenges each
      2021-05-09T15:54:54.223 chia.plotting.check_plots : INFO Testing plot /plots/plot-k32-2021-05-08-22-14-XXXXXXXXXXXXXXXXXXXXXXXXXX863215d67c527fd8e.plot k=32
      2021-05-09T15:54:54.223 chia.plotting.check_plots : INFO Pool public key: 8f3622aa023885006XXXXXXXXXXXXXXXXXXXXXXXXXXdb03c4ca83cc5f3d5
      2021-05-09T15:54:54.246 chia.plotting.check_plots : INFO Farmer public key: aa4835ad9ede103df8172d83c93XXXXXXXXXXXXXXXXXXXXXXXXXX1db92e70d8e251
      2021-05-09T15:54:54.247 chia.plotting.check_plots : INFO Local sk: <PrivateKey 11ab05946346ca5262aXXXXXXXXXXXXXXXXXXXXXXXXXX26e4af89a>
      2021-05-09T15:55:04.819 chia.plotting.check_plots : INFO Proofs 23 / 30, 0.7667
      2021-05-09T15:55:04.820 chia.plotting.check_plots : INFO Summary
      2021-05-09T15:55:04.820 chia.plotting.check_plots : INFO Found 1 valid plots, total size 0.09893 TiB
      2021-05-09T15:55:04.820 chia.plotting.check_plots : INFO 1 plots of size 32
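      (For the record, those commands were run inside the container; from the Unraid host the equivalent is something like this, assuming the container is named chia as in my earlier posts:)
      docker exec -it chia venv/bin/chia farm summary
      docker exec -it chia venv/bin/chia plots check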
  15. Does this startup line look right for a server with a 2TB SSD for plotting, 14TB for plots, 12 free threads, and 16 free GB of RAM? venv/bin/chia plots create -b 5000 -r 2 -n 6 -t /plotting/plot1 -d /plots Also, thanks for the awesome container, so easy.
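      For anyone else reading, my understanding of those flags (correct me if I have any of this wrong):
      #   -b 5000             RAM buffer per plot job, in MiB
      #   -r 2                threads per plot job
      #   -n 6                number of plots to create, one after another
      #   -t /plotting/plot1  temp directory (the 2TB SSD)
      #   -d /plots           final destination (the array share)
      venv/bin/chia plots create -b 5000 -r 2 -n 6 -t /plotting/plot1 -d /plots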
  16. Just wanted to chime in for anyone else getting the message "Community Applications requires your server to have internet access. The most common cause of this failure is a failure to resolve DNS addresses. You can try and reset your modem and router to fix this issue, or set static DNS addresses (Settings - Network Settings) of 208.67.222.222 and 208.67.220.220 and try again." You should check your network.cfg and also your boot log to see if you have an "extra" eth device being detected. I had an extra "eth1 cannot be found" message that was the linchpin of my issue. I edited the /boot/config/network.cfg entries, removed all [1] entries for eth1 (which doesn't exist), changed SYSNICS="2" to SYSNICS="1", and shut down and restarted. Network is back and everything is functioning again. The message is gone. No further issues. Of note, editing and saving from the frontend Network Settings page did not resolve this. Some details: Version 6.9.0-rc2, single 10G NIC, and the NIC is a VirtIO device (because I virtualize Unraid under Proxmox), but it is back to working just fine as a VirtIO device. I HAD BEEN CHANGING THINGS recently, manually, so this is most likely some userland issue. No, I do not remember what I was editing at a system level; I black out when I enter the command line and return only after whatever it is I am doing is done. Not sure if Unraid/Slackware has any logic to sanity-check network.cfg for nonsense, but it was an issue of my own creation. Not sure if it's a bug/defect.
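      (The fix boiled down to roughly this; back up the file first, and note the entry names in your network.cfg may differ from mine:)
      cp /boot/config/network.cfg /boot/config/network.cfg.bak
      nano /boot/config/network.cfg   # delete every entry indexed [1] (the phantom eth1) and change SYSNICS="2" to SYSNICS="1"
      # then shut down and restart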
  17. Looks like it is still not supported in the most current RC. It could especially benefit from improved client-side caching.
  18. I wanted to get into a bit of an exploration of the reasoning behind virtualizing an Unraid server, and see what kinds of use cases fit this mix well and which fall outside good use. I have a simple ecosystem at home that looks like this: Unraid on an SC846 (24-bay LFF 3.5") and Proxmox on a T620 (32-bay SFF 2.5"). Simple GbE and 10Gb links, without a fancy router like Sophos XG/pfSense (that is on the project to-do list ASAP), and a UPS on each. The first use case I am interested in is the nice live-migration feature of Proxmox for VMs. One machine is much more useful for loading up SSDs and running VMs (the T620), and one is geared more toward being a large filer (the SC846). Indeed, this is how I have them running now: I have ~15 VMs running on the T620 and currently use the 846 for Docker, with ~8 containers running on the SSD cache. Due to the need to pass through the host flag on Proxmox to run Unraid, and the fact that I cannot move drives with ease, this seems to be a one-way migration off the T620 and onto the SC846. Both machines have performed very well and not needed resets, so I am just not sure this is a great use-case fit. I do have a backup system for both, a tape library, so I do not consider either of these machines to have that functionality. In writing this, I think I have talked myself out of it.
  19. This was what it was for me. I had just been downloading some ISOs directly with wget and still had a terminal session open in that disk.
  20. Should I use the logs for that? I tried to manually specify the settings for 4k, but it still hit the transcoder.
  21. Yes, it is a 4K OLED 55EF9500, running version .40 software.
  22. I did see this in the DLNA client settings. .131 is the LG. Is this what you might mean?
      IP=192.168.1.131,DirectPlay=true,DirectStream=true,LocalResolution=1920x1080,RemoteResolution=1920x1080,OnlineResolution=1920x1080,LocalVideoQuality=99,RemoteVideoQuality=99,OnlineVideoQuality=99,SubtitleSize=60,AudioBoost=100,MusicBitrate=320;
      IP=192.168.1.103,DirectPlay=true,DirectStream=true,LocalResolution=1920x1080,RemoteResolution=1920x1080,OnlineResolution=1920x1080,LocalVideoQuality=99,RemoteVideoQuality=99,OnlineVideoQuality=99,SubtitleSize=60,AudioBoost=100,MusicBitrate=320
  23. Yeah, I tried to change this in the client (LG webOS 2), but it still kicks off the transcode when I start the stream and watch top from the command line. Everything is hardwired. I did figure out that the processors cannot handle real-time transcoding as well, unfortunately; close, but not fast enough by about 20%, it seems. If I upgrade the processors to the fastest models for the old Opteron platform, a jump from 2.1 GHz to 2.8 GHz, it may or may not be enough. I would love to be able to eke out 4K on the current server, but I think it's just too old. Time for new hardware.