Posts posted by rik3k

  1. 21 minutes ago, JorgeB said:

    Might not be the HBA, could be the enclosure or the HBA link speed, you can see here typical results for various HBAs.

     

    It's not, what speed do you get during a parity check?

     

    Same 20MB/s during a parity check. Writing to drives outside the parity-protected array gives me 150-160MB/s pretty consistently.

     

    I have a 2308 chipset HBA, so I should be able to get much higher. It's really somehow tied to parity writes. The CPU is barely touching 3% during transfers and there is plenty of RAM available.

     

    I did see a lot of documentation on the tunables, but it seemed to apply to previous versions of Unraid. Newer versions have fewer parameters, and they don't seem to make much of an impact.

  2. 14 minutes ago, JorgeB said:

    Default write mode can be around that with older disks, you can try turbo write, but note that with so many disks you might run into a controller bottleneck, depends on the controller/enclosure you're using, it can never be faster than a parity check at any point.

     

    Turbo write enabled/disabled also makes no difference. Unless a reboot is required to enable it?

     

    Are you suggesting that perhaps I split the 'load' across multiple HBAs?
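
    For reference, here is a minimal sketch of how turbo write could be toggled from a script rather than the GUI. It assumes mdcmd lives at /usr/local/sbin/mdcmd and that md_write_method 1 means reconstruct ("turbo") write while 0 means read/modify/write; it should take effect without a reboot, but please verify those values against your Unraid version.

        # Minimal sketch: flip Unraid's array write method from a script.
        # Assumptions: mdcmd is at /usr/local/sbin/mdcmd and md_write_method
        # accepts 0 (read/modify/write) and 1 (reconstruct / "turbo" write).
        import subprocess
        import sys

        MDCMD = "/usr/local/sbin/mdcmd"  # assumed path

        def set_write_method(method: int) -> None:
            """Switch the array write method; no reboot should be needed."""
            subprocess.run([MDCMD, "set", "md_write_method", str(method)], check=True)

        if __name__ == "__main__":
            # usage: python3 turbo_write.py 1  (enable)  /  0  (disable)
            set_write_method(int(sys.argv[1]))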

  3. I have a dual parity array with 27 data disks (29 drives total). I simply cannot get the write speed to the array to go above 20MB/s. When I write to a cache drive (without parity) I have no issues. I tried disabling Dockers/VMs and even tried single parity; the result is the same. I swapped out HBA cards and tried SATA parity vs SAS. The result is always the same. I tried playing around with the tunables, but they do not seem to have an impact.

     

    Write caching is enabled on all drives as well (a quick way to double-check this is sketched below). Any thoughts?

     

     

    sodium-diagnostics-20210825-0828.zip
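
    Side note on the write caching: below is a rough sketch of how the on-drive write cache flag could be double-checked across the array. It assumes hdparm is installed and the data disks appear as /dev/sd*; hdparm -W with no value only reports the current setting.

        # Sketch: report the on-disk write cache setting for each /dev/sd* device.
        # Assumption: "hdparm -W <device>" without a value only reports the current
        # write-caching state (it changes nothing). SAS drives may not answer hdparm.
        import glob
        import subprocess

        for dev in sorted(glob.glob("/dev/sd?")):
            result = subprocess.run(["hdparm", "-W", dev],
                                    capture_output=True, text=True)
            print(dev, (result.stdout or result.stderr).strip())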

  4.  

    On 8/18/2021 at 11:12 AM, guy.davis said:

     

    Glad to see you got this sorted out on our Discord server. For others on this thread, the suggestion was to use the following Plotman settings:

     

            tmpdir_stagger_phase_major: 5
            tmpdir_stagger_phase_minor: 0
            # Optional: default is 1
            tmpdir_stagger_phase_limit: 1
    
            # Don't run more than this many jobs at a time on a single temp dir.
            # Increase for staggered plotting by chia, leave at 1 for madmax sequential plotting
            tmpdir_max_jobs: 2
    
            # Don't run more than this many jobs at a time in total.
            # Increase for staggered plotting by chia, leave at 1 for madmax sequential plotting
            global_max_jobs: 2

     

    Thanks again to user doma for this great improvement to Plotman.

     

    I have been running into this same issue, but ultimately I am still bottlenecked by the transfer process. I plot in approximately 45 minutes, but the transfer takes a bit longer (around 20MB/s). When I complete a second plot, Unraid is then transferring two plots over the same 20MB/s link, which slows everything down again.

     

    The ideal solution would be for Unraid to select a second drive for the second plot. Currently it always seems to pick the same drive for both file transfers. Any guidance on how to get Unraid to automatically pick another drive?
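
    To illustrate the kind of behavior I mean, here is a rough sketch that moves a finished plot to whichever array disk currently has the most free space, so two transfers do not land on the same drive. It assumes the array disks are mounted at /mnt/disk1, /mnt/disk2, ... with a 'plots' folder on each, and that finished plots sit in an example /mnt/nvme/plotting folder; Unraid's own share allocation method (for example 'Most-free') may already cover this.

        # Sketch: move each finished plot to the array disk with the most free space,
        # so simultaneous transfers don't pile onto one drive.
        # Assumptions: disks are mounted at /mnt/disk*, each with a "plots" folder,
        # and finished plots live in /mnt/nvme/plotting (example paths only).
        import glob
        import os
        import shutil

        def most_free_disk(pattern: str = "/mnt/disk*") -> str:
            disks = [d for d in glob.glob(pattern) if os.path.isdir(d)]
            return max(disks, key=lambda d: shutil.disk_usage(d).free)

        def move_plot(plot_path: str) -> str:
            dest_dir = os.path.join(most_free_disk(), "plots")
            os.makedirs(dest_dir, exist_ok=True)
            return shutil.move(plot_path, dest_dir)

        if __name__ == "__main__":
            for plot in glob.glob("/mnt/nvme/plotting/*.plot"):
                print("moved to", move_plot(plot))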

  5. Suddenly I’m encountering a lot of ‘Seeking plots took too long’ issues and I really can’t pinpoint where they are coming from. CPU/RAM is not the issue, as neither is anywhere close to maxed out. I shut off everything else (VMs/dockers) and still have the same issue. I restarted the container. Drive read speeds don’t appear problematic. The logs don’t give much info beyond ‘plots took too long’. Any suggestions on where to look next?

     

    EDIT: OK, I have found my issue. I plot with madmax in RAM/NVMe, but phase 5:0 was holding me up. I am using an SSD cache on my Chiaplot share to speed up the process. However, I can only copy from the cache to the main array at approximately 20MB/s, which is slower than I plot, so over time the cache fills up. If I disable the cache completely, I get 50MB/s from my NVMe directly to the array.

     

    I worked around this by allowing a second plot to start at phase 5, but I wonder what is causing this behavior. I thought it was due to the container polling the plots on the cache drive, but the slow speed persists even with the Docker container shut down.
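
    Since the cache only drains at about 20MB/s, one rough way to stop it from filling up is to gate the next plot on free cache space. A minimal sketch, assuming the cache pool is mounted at /mnt/cache and a finished K32 plot needs roughly 102 GiB:

        # Sketch: block until the cache pool has room for another finished plot, so the
        # cache cannot fill faster than it drains to the array.
        # Assumptions: cache mounted at /mnt/cache; ~102 GiB per finished K32 plot.
        import shutil
        import time

        CACHE = "/mnt/cache"        # assumed mount point
        PLOT_BYTES = 102 * 1024**3  # rough final K32 size, rounded up

        def wait_for_cache_space(poll_seconds: int = 60) -> None:
            while shutil.disk_usage(CACHE).free < PLOT_BYTES:
                time.sleep(poll_seconds)

        if __name__ == "__main__":
            wait_for_cache_space()
            print("enough free space on the cache to start the next plot")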

  6. 11 hours ago, guy.davis said:

    Yeah, sorry again for this.  To fully correct, you want your `plots_dir` to have Key: plots_dir  (not /plots).  Like this:

     

     

    Hey, thanks for replying and also thanks for developing this. For the record, the 'culprit' is the second entry of plots_dir, which doesn't have an edit button. That being said, I did change that one to the correct value (along with the first one). I also had to go into config.yaml and manually add the second folder. Now I have this in there:

     

    plot_directories:
      - /plotting2
      - /plots
      - /plots,/plotting2

     

    Everything works as intended now and I do see everything properly listed on the farming page. Thanks!
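
    For anyone else tidying this up, here is a quick sketch to sanity-check the plot_directories entries. It assumes the config sits at /root/.chia/mainnet/config/config.yaml with the list under the harvester section, and that PyYAML is available inside the container:

        # Sketch: confirm every plot_directories entry exists and count its .plot files,
        # to catch stray entries like "/plots,/plotting2".
        # Assumptions: config at /root/.chia/mainnet/config/config.yaml, list under
        # the "harvester" section, PyYAML installed.
        import glob
        import os
        import yaml

        CONFIG = "/root/.chia/mainnet/config/config.yaml"  # assumed path

        with open(CONFIG) as fh:
            cfg = yaml.safe_load(fh)

        for directory in cfg["harvester"]["plot_directories"]:
            if not os.path.isdir(directory):
                print(f"{directory}: MISSING")
                continue
            count = len(glob.glob(os.path.join(directory, "*.plot")))
            print(f"{directory}: {count} plots")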

     

  7. 2 hours ago, guy.davis said:

     

    Hi, see the Wiki and set 'plots_dir'.  That should help, watch out for duplicates of 'plots_dir' in Unraid's Edit window.


    Thanks. I did look around and took note of the issue with duplicate variables. I have ‘plots_dir’ twice, but when I run it, the command ends up with -e ‘/plots’=‘/plots:/plots2’. Somehow it’s dropping the _dir.

     

    I still seem to have the issue, but it’s not a big deal; I think it’s still farming properly.
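
    To see what the container actually received, here is a small sketch that dumps any plots-related environment variables from the running container; the container name 'machinaris' is just a placeholder, adjust it to whatever yours is called.

        # Sketch: print the environment variables the running container actually got,
        # to confirm whether plots_dir survived the template or lost its "_dir".
        # Assumption: the container name "machinaris" is a placeholder.
        import json
        import subprocess

        result = subprocess.run(["docker", "inspect", "machinaris"],
                                capture_output=True, text=True, check=True)
        env = json.loads(result.stdout)[0]["Config"]["Env"]
        for var in env:
            if "plots" in var.lower():
                print(var)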

  8. SMB issues are pretty common on W10 but this one is pretty new to me.

     

    I have an SMB share enabled and, within this share, I can access and write to certain folders without issues but get an error when trying to access others. There are 8 folders in the share root: 4 of them I have full access to, while for the other 4 I simply get a 'Windows cannot access' error.

     

    Has anyone run into this issue before?

     

    EDIT: posted too fast - I had inconsistent permissions at the folder level. Pretty obvious, my mistake.
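
    For anyone landing here with the same symptom: Unraid's built-in 'New Permissions' tool is probably the cleanest fix, but the idea is roughly the sketch below, which walks one share and applies consistent ownership and modes. The share path /mnt/user/myshare is a placeholder, and nobody:users with 0777 directories / 0666 files is an assumption about the desired defaults.

        # Sketch: normalize ownership and permissions under one share so SMB stops
        # rejecting some subfolders.
        # Assumptions: the share path is a placeholder; nobody:users with 0777 dirs
        # and 0666 files is the target (check what your setup expects first).
        import os
        import pwd
        import grp

        SHARE = "/mnt/user/myshare"  # placeholder path
        UID = pwd.getpwnam("nobody").pw_uid
        GID = grp.getgrnam("users").gr_gid

        for root, dirs, files in os.walk(SHARE):
            for name in dirs:
                path = os.path.join(root, name)
                os.chown(path, UID, GID)
                os.chmod(path, 0o777)
            for name in files:
                path = os.path.join(root, name)
                os.chown(path, UID, GID)
                os.chmod(path, 0o666)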
