cjizzle

Posts
  1. The SMB password cannot be the same as the Ubuntu login password. The username can be the same, though. Assuming your Ubuntu username is ullibelgie, type this in a console on your Ubuntu machine and it will prompt you to set the password (this does not affect your actual Ubuntu login credentials, only the SMB shares):

        sudo smbpasswd -U ullibelgie

     After doing this, restart the SMB service:

        sudo service smbd restart

     Now delete your Unassigned Devices share in Unraid and create another one with the new password. (The full sequence is collected in a sketch after this list.)
  2. Hi all! I need to set Ubuntu Playground up with a max bandwidth limit. Is there a variable or something that can be put in the docker's settings to limit that? Or is that something that needs to be put in as a command immediately after starting Ubuntu Playground? My Linux knowledge is very minimal, but I'm willing to learn and research as much as possible. I can't seem to find much, or I'm looking in the wrong place. (One possible approach is sketched after this list.) Thanks!
  3. Sorry, I was away from my PC and thought maybe something quick and silly was the solution. I'm using a WD Elements USB interface from a shucked drive. I was able to format it in Unraid to XFS and am currently copying files to it. Originally it was a blank XFS drive, formatted in Ubuntu 20.04, containing nothing but a single empty folder called plots8. Sorry, I have no diagnostics or screenshot of the original error. All it showed under Unassigned Devices when I plugged it in was the Format button and that it was an unrecognized file system. Yes, I do have the Unassigned Devices Plus add-on as well. I have 6 more of these drives that I eventually need to do the same thing with. Would you please enlighten me as to where specifically to look for a log or any other info that may help? (Some filesystem checks are sketched after this list.)
  4. I pulled an XFS-formatted drive from an Ubuntu machine and Unassigned Devices doesn't seem to recognize the format. Any ideas why?
  5. Thanks. Yes, everything is set correctly per your instructions (I believe, lol) and there is no duplicate variable. plots_dir is a variable:

        /plots:/plots1:/plots2:/plots3:/plots4:/plotspool

     All the SMB shares are mapped in the container settings with the network location. In config.yaml:

        plot_directories:
          - /plotspool
          - /plots
          - /plots1
          - /plots2
          - /plots3
          - /plots4

     Right now I see Plots Passed Filter as 7/1251, 0.77290 seconds. Approximately 400 of those plots are on the Windows machine, the rest on the Unraid server itself. The last summary reported 836 plots. Also, the XFX proof was found approximately 10 hours ago. I did check, and all wallets and blockchains are synced. I just realized I didn't port forward 6888 for Flax, so maybe I went past the 30-second limit; looking back, 2 proofs were missed. Guess we will see if that fixes the Flax issue.
  6. A couple of questions, maybe I missed this somewhere. I have approximately 900 plots using the pool contract address and 400 old-school ones; I've been replotting. The old-school plots are on a Windows machine and the pool plots on the Unraid server, using Machinaris to farm. I found out my Windows machine was taking over a minute to verify proofs, so it was pointless. Anyway, I set up Machinaris to access the Windows shares; it recognized them fine and still shows less than a second access time. When I got the daily summary, though, it still showed only the pool plots, even though the Summary tab shows all plots being scanned. Does this mean Machinaris will only do pool OR solo plots, not both? Also, I found my first XFX proof since the above change, but no XFX coin was "delivered". Not like it's worth anything right now, but still... Any thoughts would be appreciated!
  7. Very excited to use Machinaris for Mad Max plotting; I used it before when parallel plotting. The only thing I am waiting for is the -v variable being implemented (maybe it has been and I just haven't seen it). In the meantime I've been using Ubuntu Playground. I am assuming most of the steps are (or will be) the same, so here it goes.

     Real quick system specs: Dell R720, 2x Xeon 12/24, 128GB DDR3-1866 RAM, 1TB cache NVMe, 2TB Chia NVMe.

     To create the ramdrive: on boot, before launching Playground or Machinaris or whatever, open a terminal and use this (plotram is my ramdrive name):

        mkdir -p /tmp/plotram && mount -t tmpfs -o size=115g tmpfs /tmp/plotram

     Then edit the container and add a path named ramdrive with container path /plotram and host path /tmp/plotram/. Make sure your paths are set up correctly for the Chia plotting NVMe, final plot location, etc. Profit!

     In Ubuntu Playground without the -v variable I was getting 30-33 minute plots (512 buckets default). Now with -v, using 32 threads, 256 buckets for phases 1+2 and 128 buckets for phases 3+4, I'm getting 27-minute plots consistently. Here's what I copy and paste into Ubuntu Playground:

        ./build/chia_plot -t /plotting1/ -2 /plotram/ -d /plots/ -k 32 -u 256 -r 32 -v 128 -n 300 -f FARMERKEY -p POOLKEY

     128GB is plenty in Unraid as long as you don't have a bunch of other crap running. I've been plotting away with no errors for days now. Also (in Ubuntu, anyway) there's no wait after a plot is done to start a new one; the finished plot transfers to its final location while the next plot is churning away, typically finishing somewhere later in Phase 1. I may have missed it, but does the new Machinaris behave the same? And any -v possibilities? (The ramdrive setup and plot command are collected in a sketch after this list.)
  8. Turns out the battery was bad in the primary Controller 0 on the MD3200, so it locked that controller out, leaving Controller 1 to do all the work even though it wasn't the "owner" of the virtual disk. I replaced the battery in Controller 0 and then made Controller 1 the owner, and now I have sustained 200+ MB/s transfers using MC (same transfer as above, Disk3 to Disk9). I'm a dolt. Hardware... check the friggin' hardware...
  9. That's the exact syntax I used in MC, and the average speed was in the mid-40 MB/s range.

        nohup mv /mnt/disk3/plots/* /mnt/disk9/plotsraid &

     Very strange behavior.
  10. Correct, Unraid share to Unraid share, using Windows 10 (not a VM) on a hardwired 1Gb connection. For example, Disk3/plots/ to Disk9/plotsraid/, Disk3 being a 6TB SAS drive in the R720 and Disk9 being the 120TB on the MD3200 (12x 6TB SAS in RAID0).
  11. Thanks for the reply. Since I already have it, it looks like I'm stuck with it for now, so I have to make the best of it. A question about transfer speed, though: using Mover, Krusader, or MC I get short bursts of speed up to 150 MB/s for a few seconds, but the average speed transferring from a drive in the R720 to the MD3200 runs around 40-50 MB/s (all drives are Seagate Enterprise 6TB 7200rpm SAS). We aren't talking about a bunch of small files, just a single 100GB file. You would think a RAID0 config of 12 drives would be pretty darn fast, especially since the file I'm transferring is on an M.2 NVMe drive. When transferring using my Windows machine (copying from share to share) I get a sustained speed of above 150 MB/s for the duration of the entire transfer. Any ideas why using Unraid itself for the transfer would be so slow? (A sketch for benchmarking the destination disk directly appears after this list.)

      EDIT: I should have added info about the cabling. From the 9200-8e to the MD3200: port 0 to controller 0 port 0, port 1 to controller 1 port 0.
  12. Sorry for being such a noob, but I'm not sure how to input the above script into the container GUI. Could you please verify this is correct? Click Edit, then Add another Path...

      Config Type: Path
      Name: plotting2
      Container Path: /plotting2
      Host Path: /mnt/chiatwo/plotting2
      Default Value: (blank)
      Access Mode: Read/Write
      Description: (blank)

      Then hit Add, then Apply, and the container restarts. Then in plotman.yaml change the value to tmp2: /plotting2. Does that all look correct? (A plotman.yaml sketch appears after this list.) Also, by any chance is it possible to pause existing jobs, make the above changes, and still have them show up to resume after the restart? Thanks again!
  13. I have a question about the tmp2 directory in the plotman.yaml file. Should that be left as the actual path to the folder, do I need to set up a new path in the docker itself, or is it a combination of the two?

      Setup:
      Cache - 1TB NVMe
      Chia - 2TB NVMe (/plotting) - set up in the docker
      Chiatwo - 2TB NVMe (/plotting2)

      The reason I ask is that I tried it with no added docker path (in the yaml I put tmp2: /mnt/chiatwo/plotting2), and I started getting a "docker img full" error, with nothing happening on Chiatwo in Phase 3. When I tried to add a path to the docker called tmp2 and applied it, it threw an "Invalid Mount Path" error and nuked the entire Machinaris docker, as in it disappeared entirely from Unraid. Upon reinstalling it just plain wouldn't load; the solution was to delete the entire folder from appdata and reinstall again. FYI, I'm using it in plotter mode only. Thanks for any replies!
  14. I'm currently running Unraid 6.9.2 on a Dell R720 with an H310 flashed to LSI 9211 IT mode. It runs perfectly and all drives are recognized. I also have an LSI 9200-8e IT card installed in the R720, connected to a PowerEdge MD3200 with 12 6TB SAS drives installed. Unraid doesn't see them, and it appears the BIOS doesn't either. When I hit Ctrl-C to get into the LSI Config Utility it shows two Direct Attached Devices, both Dell MD32xx 0820, but when I hit Enter on those, nothing happens. When connecting to the MD3200 via Ethernet from a Windows PC using the Dell Modular Disk Storage Manager, it sees the drives just fine: 65.490TB unconfigured in total, and each individual drive. What config step am I missing here?

      Edit: So I finally just created a RAID0 config using the Dell MDSM and Unraid sees it fine. I just didn't want to go that route; I got Unraid for a reason... Any ideas would be appreciated.
  15. Hardware info: I just ordered a Dell R720 and an MD3200 with LSI cards so I can use Unraid on it. This config gives me 20 3.5" SAS HDDs. I am adding 3 NVMe cache drives (on M.2 x16 cards) for a total of 23 drives. I also have a spare Lenovo TS140 and a PCIe SATA expansion card, which gives me 7 SATA ports in total. I've read and heard a little about using a persistent Linux install on a flash drive to boot the TS140, and a docker called pfSense to potentially act as the router for the TS140, but I haven't really had the time yet to dive into the details of either. My question is: is there some way to add the 7 Lenovo TS140 SATA drives to the array to max it out at the 30 drives allowed? Many thanks for any answers and/or ideas! -cjizzle
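Regarding the Samba password steps in post 1, here is the full sequence collected as a sketch. The username ullibelgie and both commands come from that post; the note about smbpasswd -a is an assumption for the case where the user has not yet been added to the Samba password database.

    # Set a Samba password for the user (prompts interactively; does not
    # change the Ubuntu login password). If the user is not yet in the
    # Samba database, "sudo smbpasswd -a ullibelgie" adds them first.
    sudo smbpasswd -U ullibelgie

    # Restart the Samba service so the new credentials take effect.
    sudo service smbd restart

    # Then, back in Unraid, delete the Unassigned Devices share and
    # re-create it with the new password.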
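Regarding the bandwidth question in post 2: nothing in these posts confirms a built-in setting for this, but one generic approach, assuming the container is granted the NET_ADMIN capability, has iproute2 installed, and uses eth0 as its interface, is to shape traffic with tc from inside the container.

    # Cap egress on eth0 to roughly 50 Mbit/s (the interface name and the
    # rate are placeholders and would need adjusting).
    tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms

    # Remove the limit again when finished.
    tc qdisc del dev eth0 root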
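For the unrecognized XFS drive in posts 3 and 4, these are generic checks that could be run from the Unraid console before formatting; sdX1 is a placeholder for the actual partition shown by lsblk, and the last line assumes Unraid's default syslog location.

    # List block devices with the filesystems and labels the kernel detects.
    lsblk -f

    # Show what signature blkid sees on the partition, if any.
    blkid /dev/sdX1

    # Dry-run check of an XFS filesystem; -n reports problems without writing.
    xfs_repair -n /dev/sdX1

    # Unassigned Devices messages generally end up in the Unraid syslog.
    grep -i unassigned /var/log/syslog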
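The ramdrive workflow from post 7, collected into one sketch. The commands and flags are the ones given in that post; the container path mapping is described in a comment because it is set in the Unraid GUI rather than in a shell, and FARMERKEY/POOLKEY remain the post's placeholders.

    # Create a 115 GiB tmpfs ramdrive (tmpfs is not persistent, so this is
    # re-run after each boot, before the container starts).
    mkdir -p /tmp/plotram && mount -t tmpfs -o size=115g tmpfs /tmp/plotram

    # In the container settings, add a path named "ramdrive" mapping host
    # /tmp/plotram/ to container /plotram.

    # Mad Max plotter invocation from the post: -t temp dir on NVMe, -2
    # second temp dir on the ramdrive, -d final destination, -k plot size,
    # -u/-v bucket counts for phases 1+2 / 3+4, -r threads, -n plot count.
    ./build/chia_plot -t /plotting1/ -2 /plotram/ -d /plots/ -k 32 -u 256 \
        -r 32 -v 128 -n 300 -f FARMERKEY -p POOLKEY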
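For the slow array-to-array copies in posts 9-11, one way to separate raw disk throughput from the copy tools is to benchmark the destination disk directly with dd, bypassing the page cache; the disk9 path comes from those posts and ddtest.bin is a placeholder file that gets removed afterwards.

    # Write 10 GiB straight to the destination disk (direct I/O, no cache).
    dd if=/dev/zero of=/mnt/disk9/plotsraid/ddtest.bin bs=1M count=10240 oflag=direct status=progress

    # Read it back the same way to measure sequential read speed.
    dd if=/mnt/disk9/plotsraid/ddtest.bin of=/dev/null bs=1M iflag=direct status=progress

    # Clean up the test file.
    rm /mnt/disk9/plotsraid/ddtest.bin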
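For the tmp2 question in posts 12 and 13, this is a sketch of what the directories section of a generic plotman.yaml might look like once the /plotting2 path mapping from post 12 exists inside the container; it is not the exact Machinaris file, and the /plots destination is shown only as an example.

    # plotman.yaml, directories section only - all paths are container
    # paths, matching the /plotting and /plotting2 mappings in the posts.
    directories:
      tmp:
        - /plotting
      tmp2: /plotting2
      dst:
        - /plots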