cjizzle

Everything posted by cjizzle

  1. OK thanks. Will upload in the morning. It's 3:30 am here and I'm too tired to go any further atm. Thank you again.
  2. I did that from iDRAC, but I'm not sure how to get the file to upload.
  3. Not sure how to post. I have no access to the files other than the Java screen in iDRAC.
  4. Same here. Had to access the GUI from iDRAC for my Dell R720. Bond0 shows as down and there is no network interface. Help! The previous version, 6.10.1, was fine. Did a cold reboot multiple times and no luck.
  5. Pointless. The chain is dead right now. They are going to release a new client and restart the mainnet, date TBD. You can read all about it on their discord.
  6. Just an FYI, same exact error with Flora this morning. I saw the report from another user was for HDD, as was my original.
  7. /usr/local/bin/fd-cli nft-recover -l 1fe... -p xch1h.. -nh 127.0.0.1 -np 28559 -ct /root/.hddcoin/mainnet/config/ssl/full_node/private_full_node.crt -ck /root/.hddcoin/mainnet/config/ssl/full_node/private_full_node.key
     --> Executed at: 20211031-021142
     I replaced the rest of the addresses with dots in the above paste, but they are the correct launcher and pool contract addresses.
  8. Not sure if this helps, but Chives cannot use the same plots as Chia and most other Chia forks; it only supports k29, k30 and k31. Chia and most forks by default use k32 or higher. Granted, the plot sizes are smaller for Chives, but you would need to dedicate space just for making Chives plots that aren't compatible with many other forks. For me anyway, this makes Chives a no-go, since I can't use what I already have.
  9. Thanks for the info! I checked the hddcoin fork log and it threw the following error:
     An error occurred while sending the recovery transaction.
     Traceback (most recent call last):
       File "/usr/local/bin/fd-cli", line 33, in <module>
         sys.exit(load_entry_point('fd-cli', 'console_scripts', 'fd-cli')())
       File "/fd-cli/fd_cli/fd_cli.py", line 220, in main
         fd_cli()
       File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 829, in __call__
         return self.main(*args, **kwargs)
       File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 782, in main
         rv = self.invoke(ctx)
       File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1259, in invoke
         return _process_result(sub_ctx.command.invoke(sub_ctx))
       File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1066, in invoke
         return ctx.invoke(self.callback, **ctx.params)
       File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 610, in invoke
         return callback(*args, **kwargs)
       File "/usr/local/lib/python3.9/dist-packages/click/decorators.py", line 21, in new_func
         return f(get_current_context(), *args, **kwargs)
       File "/fd-cli/fd_cli/fd_cli.py", line 193, in fd_cli_nft_recover
         fd_cli_cmd_nft_recover(
       File "/fd-cli/fd_cli/fd_cli_cmd_nft_recover.py", line 211, in fd_cli_cmd_nft_recover
         fd_cli_print_raw(exception, pre=pre)
       File "/fd-cli/fd_cli/fd_cli_print.py", line 15, in fd_cli_print_raw
         print(f'{" " * pre * 4}{value:{fill}s}')
     TypeError: unsupported format string passed to HTTPError.__format__
     It shows I have 1.75 HDD not claimed yet.
  10. This is probably a dumb question, but is this in the GUI somewhere? Awesome feature, I can't seem to find it. Is it automatic?
  11. Hi all, I'm running a Win 10 VM on a Dell R720 with an MD3200 storage array attached. I need Windows to see some Unraid shares as directly attached storage instead of SMB. I've been wading through a ton of posts similar to this but have yet to figure it out, so I guess I will make one more. The reason for all of this is that I'm trying to mine some alt Chia coin in my VM, and SMB is causing response time to be anywhere from 5 to 25 seconds on average, some even longer. I'm currently running Machinaris in a Docker container and the average response is under 1 second. It's kind of like trying to get from an upstairs bedroom down to the living room in a house. Instead of walking down the stairs, Windows 10 is making me go out the upstairs bedroom window, drive downtown, go a few blocks left then a few right, then park next to the old burned-down barn 3 miles away and walk from there to the front door of the house. Just like Bing Maps tells you.
  12. Have you already installed the 2 TB disk? If so, you can use Midnight Commander to copy the data over. Open a terminal and type mc; the interface is relatively straightforward. Copy your data over (see the copy sketch after this list), then read this: https://wiki.unraid.net/Shrink_array
  13. The SMB password cannot be the same as the Ubuntu login password. The name can be the same, though. Assuming your Ubuntu username is ullibelgie, type this in a console on your Ubuntu machine and it will prompt you to set the password (this does not affect your actual Ubuntu login credentials, only the SMB shares):
      sudo smbpasswd -U ullibelgie
      After doing this, restart the SMB service:
      sudo service smbd restart
      Now delete your unassigned device share in Unraid and create another one with the new password. (The full sequence is also collected in a sketch after this list.)
  14. Hi all! I need to set Ubuntu Playground up with a max bandwidth limit. Is there a variable or something that can be put in a docker's settings to limit that? Or is that something that needs to be put in as a command immediately after starting Ubuntu Playground? (One possible approach is sketched after this list.) My Linux knowledge is very minimal, but I'm willing to learn and research as much as possible. I can't seem to find too much, or I'm looking in the wrong place. Thanks!
  15. Sorry, I was away from the PC and thought maybe something quick and silly was the solution. Using a WD Elements USB interface from a shucked drive. I was able to format it in Unraid to XFS and am currently copying files to it. Originally it was a blank XFS drive aside from a single empty folder called plots8. I originally formatted it in Ubuntu 20.04. Sorry, I have no diagnostics or screenshot of the original error. All it showed under Unassigned Devices when I plugged it in was the Format button and that it was an unrecognized file system. Yes, I do have the Unassigned Devices Plus add-on as well. I have 6 more of these drives that I eventually need to do the same thing with. Would you please enlighten me as to where specifically to look for a log or any other info that may help?
  16. I pulled an XFS-formatted drive from an Ubuntu machine and Unassigned Devices doesn't seem to recognize the format. Any ideas why?
  17. Thanks. Yes, everything is set correctly per your instructions (I believe, lol) and there is no duplicate variable.
      plots_dir is a variable: /plots:/plots1:/plots2:/plots3:/plots4:/plotspool
      All the SMB shares are mapped in the container settings with the network location.
      In the config.yaml:
      plot_directories:
      - /plotspool
      - /plots
      - /plots1
      - /plots2
      - /plots3
      - /plots4
      Right now I see Plots Passed Filter as 7/1251, 0.77290 seconds. Appx 400 of those plots are on the Windows machine, the rest on the Unraid server itself. The last Summary reported 836 plots. Also, the XFX proof was found appx 10 hours ago. I did check, and all wallets and blockchains are synced. Just realized I didn't port forward 6888 for Flax, so maybe I went past the 30-second limit. Looking back, 2 proofs were missed. Guess we will see if that fixes the Flax issue.
  18. Couple of questions; maybe I missed it somewhere. I have appx 900 plots using the pool contract address and 400 old school... been replotting. The old-school plots are on a Windows machine and the pool plots are on the Unraid server, using Machinaris to farm. Found out my Windows machine was taking over a minute to verify proofs, so it was pointless. Anyway, I set up Machinaris to access the Windows shares; it recognized them fine and access time is still less than a second. When I got the daily summary, though, it still showed only the pool plots, even though the Summary tab shows all plots being scanned. Does this mean Machinaris will only do pool OR solo plots, not both? Also found my first XFX proof since the above change, but no XFX coin was "delivered". Not like it's worth anything right now, but still... Any thoughts would be appreciated!
  19. Very excited to use Machinaris for Mad Max plotting; I used it before when parallel plotting. The only thing I am waiting for is the -v variable being implemented (maybe it has been and I just haven't seen it). In the meantime I've been using Ubuntu Playground. I am assuming most of the steps are (or will be) the same, so here it goes.
      Real quick system specs: Dell R720, 2x Xeon 12/24, 128GB RAM DDR3 1866, 1TB cache NVMe, 2TB Chia NVMe.
      To create the ramdrive: on boot, before launching Playground or Machinaris or whatever, open a terminal and use this (plotram is my ramdrive name):
      mkdir -p /tmp/plotram && mount -t tmpfs -o size=115g tmpfs /tmp/plotram
      Edit the container and add a path:
      ramdrive    /plotram    /tmp/plotram/
      Make sure your paths are set up correctly for Chia-specific NVMe plotting, final plot location, etc. Profit!
      In Ubuntu Playground without the -v variable I was getting 30-33 minute plots (512 buckets default). Now with -v, using 32 threads, 256 buckets for phases 1+2 and 128 buckets for phases 3+4, I'm getting 27-minute plots consistently. Here's what I copy and paste into Ubuntu Playground (also collected into one script in the sketch after this list):
      ./build/chia_plot -t /plotting1/ -2 /plotram/ -d /plots/ -k 32 -u 256 -r 32 -v 128 -n 300 -f FARMERKEY -p POOLKEY
      128GB is plenty in Unraid as long as you don't have a bunch of other crap running. Been plotting away with no errors for days now. Also (in Ubuntu anyway) there's no wait after a plot is done to start a new one; the finished plot transfers to its final location while the next plot is churning away, typically finishing somewhere later in Phase 1. I may have missed it, but does this new Machinaris behave the same? And any -v possibilities?
  20. Turns out the battery was bad in the primary Controller 0 on the MD3200, so it locked it out, leaving Controller 1 to do all the work even though it wasn't the "owner" of the virtual disk. Replaced the battery in 0, then made Controller 1 the owner, and now I have sustained 200+ MB/s transfers using MC (same transfer as above, Disk3 to Disk9). I'm a dolt. Hardware... check the friggin hardware...
  21. That's the exact syntax I used in mc, and the average speed was in the mid-40 MB/s range.
      nohup mv /mnt/disk3/plots/* /mnt/disk9/plotsraid &
      Very strange behavior.
  22. Correct, Unraid share to Unraid share using Windows 10 (not a VM) on a hardwired 1Gb connection. For example, Disk3/plots/ to Disk9/plotsraid/, Disk3 being a 6TB SAS drive in the R720 and Disk9 being the 120TB on the MD3200 (12x 6TB SAS in RAID 0).
  23. Thanks for the reply. Since I have it, it looks like I'm stuck with it for now, so I have to make the best of it. Question about transfer speed, though... Using Mover, Krusader, or MC I get short bursts of speed up to 150 MB/s for a few seconds, but the average speed transferring from a drive in the R720 to the MD3200 runs around 40-50 MB/s (all drives are Seagate Enterprise 6TB 7200rpm SAS). We aren't talking a bunch of small files, just a single 100GB file. You would think a RAID 0 config of 12 drives would be pretty darn fast, especially since the file I'm transferring is on an M.2 NVMe drive. When transferring using my Windows machine (copying from share to share) I get a sustained speed of above 150 MB/s for the duration of the entire transfer. Any ideas why using Unraid itself for the transfer would be so slow? EDIT: Should have added info about cabling... From the 9200-8e to the MD3200: port 0 to controller 0 port 0, port 1 to controller 1 port 0.
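
Copy sketch for post 12: a minimal, untested example of moving a disk's contents before shrinking the array. mc is interactive, so rsync is shown here as a scriptable alternative; the source and destination disk paths are hypothetical, so adjust them to your own mounts and confirm free space first.

    # copy everything from the disk being emptied to another data disk
    rsync -avh /mnt/disk2/ /mnt/disk1/
    # re-run as a checksummed dry run to confirm nothing was missed,
    # then follow the Shrink_array wiki steps
    rsync -avhc --dry-run /mnt/disk2/ /mnt/disk1/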
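
Samba sketch for post 13: the same steps collected in order, assuming the Ubuntu user is ullibelgie. If the user has never been added to the Samba password database, smbpasswd typically needs -a the first time; the systemctl line is an equivalent alternative to the service command on systemd-based Ubuntu.

    # add (or update) the Samba user; this prompts for an SMB-only password,
    # which per the post above must differ from the Ubuntu login password
    sudo smbpasswd -a ullibelgie
    # restart Samba so the new credentials take effect
    sudo systemctl restart smbd   # or: sudo service smbd restart
    # then recreate the Unassigned Devices share in Unraid with the new password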
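
Bandwidth sketch for post 14: one possible approach, rather than a template variable, is to shape traffic inside the container with tc (from the iproute2 package) after it starts. The interface name eth0 and the 50 Mbit/s rate are assumptions; this only caps outbound traffic, and the container likely needs the NET_ADMIN capability for tc to work.

    # inside the Ubuntu Playground container, after it starts
    apt-get update && apt-get install -y iproute2
    # cap outbound traffic on eth0 to roughly 50 Mbit/s
    tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms
    # remove the limit again
    tc qdisc del dev eth0 root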
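
Plotting sketch for post 19: the ramdrive and Mad Max steps from that post collected into one script. FARMERKEY, POOLKEY, and all paths are placeholders taken straight from the post, and the 115g tmpfs size assumes the 128GB machine described there; treat it as a sketch rather than a drop-in.

    #!/bin/bash
    # create the tmpfs ramdrive once per boot (plotram is the name used in the post)
    mkdir -p /tmp/plotram
    mountpoint -q /tmp/plotram || mount -t tmpfs -o size=115g tmpfs /tmp/plotram
    # map /tmp/plotram into the container as /plotram, then run the plotter inside it:
    ./build/chia_plot -t /plotting1/ -2 /plotram/ -d /plots/ \
        -k 32 -u 256 -r 32 -v 128 -n 300 \
        -f FARMERKEY -p POOLKEY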