Ystebad

Members
  • Posts: 158
Everything posted by Ystebad

  1. My first use case is to install and run the normal Chia desktop client. AFAIK it's not at all GPU intensive, so I assume an emulated GPU would be fine, but I thought I couldn't install Windows without a REAL GPU. Maybe I was mistaken? Eventually I want to get a nice GPU for Windows passthrough, but they are impossible to come by right now.
  2. I only have one GPU and it's used by Docker containers. I would very much like to spin up either a Windows or Linux VM and access it remotely over the LAN (Remote Desktop). Is this possible without a GPU? I don't think it's possible for Windows, but what about Linux? I know a Linux server install can do it, but I would like the desktop version, as I need help getting Chia running and would like the GUI available. Thank you!
  3. I guess I'm giving up on this and installing a Windows VM and running the actual Chia client. I've spent weeks trying to get this to work, I have almost 1000 plots, and not a single dime of Chia to show for it. The pool won't support me, as they say Machinaris is an unsupported client, so as much as I hate Windows, I'm going that way. Hope this project continues; maybe someday I'll try again.
  4. @DaPedda Yes, I had done that already: pool_contract_address: xch1z4xzuuea0yakj8wudayzh9t3kny4pxhkk3at7z7m7073wuxungqsczhkqv
  5. If anyone can help me get pool plots working, I'd greatly appreciate it. As far as I know I'm connected to the flexpool.io pool. I've been mining pool plots for close to two weeks, yet I am seeing nothing back, and when I check my address on their website (using the wallet address listed after "Payout instructions (pool will pay to this address)") it says "Specified address was not found in our system. Try waiting some time if you are already mining." I figure some of you must be using Machinaris to mine to Flexpool, so I'd greatly appreciate any help getting this working. I've spent a huge amount of money on Chia with literally NOTHING to show for it after months of farming, and I'm extremely frustrated. My info:
     ------
        # chia plotnft show
        Wallet height: 665859
        Sync status: Synced

        Wallet id 2:
        Current state: FARMING_TO_POOL
        Current state from block height: 567495
        Launcher ID: 8e700e562052fbe7daeaaedb9238f343c63053036f8d2aaf154ec2fcc78c91eb
        Target address (not for plotting): xch1d00purr0n5ae8hz706rcwge90m09w00wa4v78d9fpawgdhs6p0fsjt6rd8
        Owner public key: 84017b3c91752f9a53313cc716b29fe9b45f922610cfca2065f5a9e10f30bbdc4a96d6b8e5d99341e175b1db902a0220
        P2 singleton address (pool contract address for plotting): xch1dkcnya3m22la6q5625xu0ra0r7wwnj9aeqffay24xa26yktg89xqkgsgrc
        Current pool URL: https://xch-us-east.flexpool.io
        Current difficulty: 1
        Points balance: 9999
        Relative lock height: 100 blocks
        Payout instructions (pool will pay to this address): xch1z4xzuuea0yakj8wudayzh9t3kny4pxhkk3at7z7m7073wuxungqsczhkqv

        Wallet id 3:
        Current state: FARMING_TO_POOL
        Current state from block height: 567420
        Launcher ID: f2b4ae1b6f1dcdcd8d0e4046447ae576f4a5b85fa8f0f12ceae270bb02b6bd1b
        Target address (not for plotting): xch1d00purr0n5ae8hz706rcwge90m09w00wa4v78d9fpawgdhs6p0fsjt6rd8
        Owner public key: a3f36abcba4f297329890129b8996fc5491f69362c1719dd3a6ceed9a3ec656cbdd1c2664bac1e6335dce187c03a948a
        P2 singleton address (pool contract address for plotting): xch1lhzxecj0x29qr8gmzjklqe00z8za0nfhyhpmj8xekkf365qectjqz5q3xa
        Current pool URL: https://xch-us-east.flexpool.io
        Current difficulty: 1
        Points balance: 9999
        Relative lock height: 100 blocks
        Payout instructions (pool will pay to this address): xch13d9s8vv96h9px2wczjuxqxl88uvgqg3qzc7609pqnqsyyxyveeqqheks2s
  6. AH! I was trying to do everything from within the Machinaris configuration settings in the web UI! Thank you so very much.
  7. I don't see this anywhere in my config. I assume this would go in the farming section. Sorry to be dense, but I'm unclear on the syntax: so I insert this into my farming YAML?: plots_dir: key=plots_dir value=/plots:/chiapoolplots (Will this also keep farming my old plots? I thought that's what plot_directories: under harvester was setting.)
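     For what it's worth, that "key=plots_dir / value=/plots:/chiapoolplots" pairing reads like a Docker environment variable on the Machinaris container rather than a line inside the farming YAML. A rough sketch of how such a variable might be passed if the container were recreated from the command line (the image name, mount paths, and other options below are illustrative; match them to the existing Machinaris template):

        # Hypothetical docker run fragment -- only the plots_dir environment
        # variable (a colon-separated list of in-container plot paths) is the
        # point here; keep whatever other settings the template already has.
        docker run -d --name machinaris \
          -v /mnt/user/plots:/plots \
          -v /mnt/user/chiapoolplots:/chiapoolplots \
          -e plots_dir="/plots:/chiapoolplots" \
          ghcr.io/guydavis/machinaris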
  8. I am still struggling to get online with pool plotting. I started plotting to a different directory for my pool plots so I can keep them separated. Original plots were saved to /plots and new ones are going to /chiapoolplots. My /chiapoolplots share has files in it (14 currently), so the plotter is working and the share is mapped correctly. Under the Farming tab none of these plots shows up (only older plots). Under Settings | Farming I have:
        plot_directories:
          - /plots
          - /chiapoolplots
     What am I missing?
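     For reference, the plot_directories list edited on that settings page normally lives under the harvester: section of Chia's config.yaml. A minimal sketch of the relevant fragment, assuming both shares are mounted at those in-container paths (everything else in the file left untouched):

        # Fragment of config.yaml -- only the harvester plot directory list is shown.
        harvester:
          plot_directories:
            - /plots           # original plots
            - /chiapoolplots   # new pool plots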
  9. @nimaim So I've managed, via a user script, to set the power limit for my 3080 at array startup. However, if using this container allows more control, that would be nice. But if I understand you correctly, I have to run it once for each card, then stop it/them, and then start the actual mining docker after that? Do you automate this? Seems clunky, but if it works... I need to add a second card to my server, and it's different from the first one, so I'm leery about how to get the power settings running correctly for each card.
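     A minimal sketch of what such a startup script might look like once a second, different card is in the box (the GPU indexes and wattages below are placeholders; check nvidia-smi -L for the real indexes and each card's supported power range):

        #!/bin/bash
        # Illustrative array-startup script: enable persistence mode once,
        # then apply a separate power limit per card.
        nvidia-smi -pm 1          # persistence mode for all GPUs
        nvidia-smi -i 0 -pl 227   # card 0, e.g. the 3080
        nvidia-smi -i 1 -pl 180   # card 1, whatever limit suits that model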
  10. Interesting - my plotman.yaml didn't have anything listed at all for pool_contract_address, even though I'm up to date. It did not have the comments about removing it either. I commented out pool_pk and manually added pool_contract_address. chia plotnft show gives me Wallet id 2 and Wallet id 3. I am also plotting Flax, so I guess that's what id 3 is? I selected the address after "Payout instructions" and pasted it in as the pool_contract_address. I tried leaving it blank to auto-fill as you suggested, but it would not save; it gave me an error.
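     A minimal sketch of how the plotting: section of plotman.yaml might look after that edit (the values below are placeholders; note that, going by the labels in the chia plotnft show output quoted earlier, the address meant for plotting is the "P2 singleton address (pool contract address for plotting)" rather than the payout address):

        plotting:
          farmer_pk: <your farmer public key>
          #pool_pk: <commented out; only used for old-style/OG plots>
          pool_contract_address: <P2 singleton address from chia plotnft show>
          type: madmax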
  11. So I need some help with pools. I started plotting what I thought were portable pool plots as soon as we were able, and I set them up under "self pool". I've since joined Flexpool, but I don't seem to get any results from them and my ID is not showing in their system. When I check my plots, they list my address as the pool key, which is what I thought self-pooling would set up, but I understood that if I joined a pool it would transfer those plots over. However, Flexpool help said: "If you have a pool public key from the 'chia plots check' or gui, then it is not a portable plot and cannot be used to farm with a pool. It's likely what's called an OG plot which was made before Chia Network 1.2.0 software, or was made incorrectly with 1.2.0 or later software. Do any of your plots show up the way I displayed earlier, with *no* Pool public key?" What am I missing here? In the plot config, what should I have following pool_pk:?
  12. I really miss RAID 6. When this happened, I would just yank a drive, replace it, and it was back to normal. During an Unraid parity rebuild on a dual-parity array, let's say a second drive fails: all my data would still be correct if I added back two drives and rebuilt, right? I assume that the parity drives don't stop doing their job during a rebuild, especially given it takes about 2 days for my array to do a parity pass.
  13. I have a drive that is throwing multiple errors and probably needs to be replaced. I also had it included in the array, but now I want to replace it with an unassigned device, because that device will be used solely for CCTV recording per the SpaceInvader One video recommendations (not slowing the array, no excess wear on the parity disks given the constant writes). Since I have dual parity on my system, can I just remove the drive and rebuild parity? I'm not really sure, but it seems WAY too complicated to do such a simple task as removing a drive. Unraid really should have a checkbox on a drive saying 'remove this from array'. I've seen some guides and FAQs, but they all talk about having no parity as soon as you remove a drive; since I have dual parity, I'm not sure that is true in this case. Appreciate any help.
  14. Ok, will try. Wish there were a way to restart without losing plots - I stop plotting, but then it takes hours before I'm able to restart. Thank you.
  15. I waited until the Flax ledger was updated and I'm now fully synced, but the alerts log is full of "Flax harvester appears to be offline" messages. Is there something special I have to do to farm my plots on the Flax chain?
  16. Apparently the Chia version supporting pools is now officially released - any idea on the timeline for support? I've stopped my plotting, so I've got a few hours before the ones remaining are done; take your time 😁
  17. Hopefully someone can help, because I'm about to go mad. I have a Plex server up and running on a NUC on my network. I have added a newly installed Plex docker on my Unraid server (different physical machine, different IP). When I open the container from the Docker tab in Unraid, even though it's a brand-new install, it shows me my users; the only thing I can do is select one, and then I appear to be logged into my NUC instance of Plex. How can I get to the NEW Unraid docker version of Plex, set it up, and start using it for access? I've tried putting in the Unraid server address directly (10.0.0.11:32400) and I get the same behavior. Maddening!!!
  18. I am running T-Rex miner in a docker to mine ETH with my Nvidia 3080 GPU. I run the following commands manually from an Unraid console to reduce power from almost 400 watts down to 225:
        nvidia-smi -pm 1
        nvidia-smi -i 0 -pl 227
     Is there a way for these two commands to run automatically when the array restarts? If I reboot or stop/start the array and forget to do it manually, it runs REALLY hot and wastes a bunch of electricity. advTHANKSance
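     One common way to do this (not the only one) is the User Scripts plugin with a script scheduled to run "At Startup of Array". A minimal sketch, reusing the exact commands from the post:

        #!/bin/bash
        # Illustrative User Scripts entry, run at array startup: re-applies
        # persistence mode and the power cap so the card never sits at stock
        # power draw after a reboot.
        nvidia-smi -pm 1
        nvidia-smi -i 0 -pl 227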
  19. Mine says 64Gb and then down below says 62. Not sure why yours is so low.
  20. Downloaded the new version and am trying to use the new madmax plotter. I think I changed my plotting config as per the recommendation, but each time I try to start it, it says "starting" then "plotman started successfully", yet no cores are being used, nothing shows up as ongoing under Plotting, and it says "idle" at the top. My config:

        # Learn more about Plotman at https://github.com/ericaltendorf/plotman
        # https://github.com/ericaltendorf/plotman/wiki/Configuration#versions
        version: [1]

        logging:
          # DO NOT CHANGE THIS IN-CONTAINER PATH USED BY MACHINARIS!
          plots: /root/.chia/plotman/logs

        user_interface:
          use_stty_size: False

        commands:
          interactive:
            autostart_plotting: False
            autostart_archiving: False

        # Where to plot and log.
        directories:
          # One or more directories to use as tmp dirs for plotting. The
          # scheduler will use all of them and distribute jobs among them.
          # It assumes that IO is independent for each one (i.e., that each
          # one is on a different physical device).
          #
          # If multiple directories share a common prefix, reports will
          # abbreviate and show just the uniquely identifying suffix.
          # REMEMBER ALL PATHS ARE IN-CONTAINER, THEY MAP TO HOST PATHS
          tmp:
            - /madmaxdisk9

          # Optional: Allows overriding some characteristics of certain tmp
          # directories. This contains a map of tmp directory names to
          # attributes. If a tmp directory and attribute is not listed here,
          # it uses the default attribute setting from the main configuration.
          #
          # Currently supported override parameters:
          #   - tmpdir_max_jobs
          #tmp_overrides:
            # In this example, /plotting3 is larger than the other tmp
            # dirs and it can hold more plots than the default.
            #/plotting3:
            #  tmpdir_max_jobs: 5

          # Optional: tmp2 directory. If specified, will be passed to
          # the chia and madmax plotters as the '-2' param.
          #tmp2: /plotting2

          # One or more directories; the scheduler will use all of them.
          # These again are presumed to be on independent physical devices,
          # so writes (plot jobs) and reads (archivals) can be scheduled
          # to minimize IO contention.
          # REMEMBER ALL PATHS ARE IN-CONTAINER, THEY MAP TO HOST PATHS
          dst:
            - /plots

        # See: https://github.com/guydavis/machinaris/wiki/Plotman#archiving
        #archiving:
          #target: rsyncd
          #env:
            #site_root: /mnt/disks
            #user: root
            #host: aragorn
            #rsync_port: 12000
            #site: disks

        # Plotting scheduling parameters
        scheduling:
          # Run a job on a particular temp dir only if the number of existing jobs
          # before [tmpdir_stagger_phase_major : tmpdir_stagger_phase_minor]
          # is less than tmpdir_stagger_phase_limit.
          # Phase major corresponds to the plot phase, phase minor corresponds to
          # the table or table pair in sequence, phase limit corresponds to
          # the number of plots allowed before [phase major, phase minor].
          tmpdir_stagger_phase_major: 4
          tmpdir_stagger_phase_minor: 0
          # Optional: default is 1
          tmpdir_stagger_phase_limit: 1

          # Don't run more than this many jobs at a time on a single temp dir.
          # Increase for staggered plotting by chia, leave at 1 for madmax sequential plotting.
          tmpdir_max_jobs: 1

          # Don't run more than this many jobs at a time in total.
          # Increase for staggered plotting by chia, leave at 1 for madmax sequential plotting.
          global_max_jobs: 1

          # Don't run any jobs (across all temp dirs) more often than this, in minutes.
          global_stagger_m: 30

          # How often the daemon wakes to consider starting a new plot job, in seconds.
          polling_time_s: 20

        # Configure the plotter. See: https://github.com/guydavis/machinaris/wiki/Plotman#plotting
        plotting:
          farmer_pk: yes this is my number here
          pool_pk: another private number here
          # If you enable 'chia', plot in *parallel* with higher tmpdir_max_jobs and global_max_jobs.
          # If you enable 'madmax', plot in *sequence* with very low tmpdir_max_jobs and global_max_jobs.
          type: madmax

          # The chia plotter: https://github.com/Chia-Network/chia-blockchain
          #chia:
          #  k: 32            # k-size of plot, leave at 32 most of the time
          #  e: False         # Disable bitfield back sorting (default is True)
          #  n_threads: 2     # Threads per job
          #  n_buckets: 128   # Number of buckets to split data into
          #  job_buffer: 3389 # Per job memory

          # The madmax plotter: https://github.com/madMAx43v3r/chia-plotter
          madmax:
            n_threads: 4   # Default is 4, crank up if you have many cores
            n_buckets: 256 # Default is 256

     PLOTTING LOGS (abridged; the same status lines repeat about every 20 seconds):

        ...sleeping 20 s: (False, 'stagger (62s/1800s)')
        ...sleeping 20 s: (False, 'stagger (83s/1800s)')
        [stagger counter keeps climbing every ~20 s]
        ...sleeping 20 s: (False, 'stagger (1792s/1800s)')
        ...sleeping 20 s: (False, 'max jobs (6) - (1813s/1800s)')
        [max jobs (6) repeats every ~20 s]
        ...sleeping 20 s: (False, 'max jobs (6) - (2671s/1800s)')
        ...sleeping 20 s: (False, 'no eligible tempdirs (2691s/1800s)')
        [no eligible tempdirs repeats every ~20 s]
        ...sleeping 20 s: (False, 'no eligible tempdirs (2958s/1800s)')
        ...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /plotting2 -d /plots ; logging to /root/.chia/plotman/logs/2021-06-25T09_15_46.524968-04_00.log')
        ...sleeping 20 s: (False, 'stagger (21s/1800s)')
        [stagger counter keeps climbing every ~20 s]
        ...sleeping 20 s: (False, 'stagger (1790s/1800s)')
        ...sleeping 20 s: (False, 'max jobs (6) - (1811s/1800s)')
        [max jobs (6) repeats every ~20 s]
        ...sleeping 20 s: (False, 'max jobs (6) - (2290s/1800s)')
        Terminated
  21. Is there a guide on how to set up and run a server? I would love to host one. This brings back great memories.
  22. That looks really cool - it would be awesome to stop burning up NVMe drives! Here's hoping.
  23. @mcai3db3 - thanks, that's what I missed. So I guess I have to run that each time I restart Unraid as well? Still hoping for undervolting ability, as it would drop temps a lot, but at least I can run - appreciate you. Edit: is it possible to set fan speeds manually? I would like 100% to keep the memory cool.
  24. Came to try this after having no success with power control in Phoenix miner. Might be the same problem here, I guess. My 3080 will only run at full power, which overheats and is not efficient. I really REALLY hope someone can get overclocks to work, but I'd settle for a power limit if I could at least get that working. Based on what was posted above, I opened a terminal in Unraid (not the docker) and typed the following:
        nvidia-smi -i 0 -pl 240
     The system replied:
        Power limit for GPU 00000000:21:00.0 was set to 240.00 W from 370.00 W.
        Warning: persistence mode is disabled on device 00000000:21:00.0. See the Known Issues section of the nvidia-smi(1) man page for more information. Run with [--help | -h] switch to get more information on how to enable persistence mode.
        All done.
     However, when I then ran the T-Rex docker it showed 291 W being used (see below):
        Mining at eth-us-east.flexpool.io:5555, diff: 4.00 G
        GPU #0: Gigabyte NVIDIA RTX 3080 - 87.29 MH/s, [T:88C, P:291W, F:100%, E:301kH/W], 6/6 R:0%
        Shares/min: 3.068 (Avr. 2.25) Uptime: 3 mins 51 secs | Algo: ethash | T-Rex v0.20.4
        WD: 3 mins 52 secs, shares: 6/6
     Did I miss something?
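     The warning in that output is about persistence mode being disabled, and without it the limit may not be applied or may not stick. A minimal sketch of the sequence (the same flags already used earlier in the thread), followed by a query to confirm the enforced limit:

        # Enable persistence mode first, then set the cap, then check what
        # the driver reports as the enforced power limit.
        nvidia-smi -pm 1
        nvidia-smi -i 0 -pl 240
        nvidia-smi -i 0 -q -d POWER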