DieFalse


Posts posted by DieFalse

  1. This worked. I built 3x4 mirror pools and tested each until I found the slowdown, which I located in one test group, then isolated it to two drives so far. I pulled the two drives since I have two spares, built my RaidZ2 1x12 pool, and am now getting 250MB/s+ on transfers, which is much more acceptable given I have 101TB of data to move before I can format the original 12 drives and add them as the second group of 12, making it 2x12 RaidZ2.

     

     Once that's done, I will test the two drives individually and return the culprit(s) to ensure I have spares on hand (a quick per-drive read test is sketched below).

     

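     For anyone following the same troubleshooting approach, here is a minimal sketch of how the per-drive test could be scripted once the suspect drives are out of the pool. The device paths are placeholders (assumptions), it needs root, and it only gives a rough sequential-read comparison, not a proper burn-in.

    import os
    import time

    # Placeholder device paths -- substitute the suspect drives you pulled.
    DEVICES = ["/dev/sdx", "/dev/sdy"]
    CHUNK = 64 * 1024 * 1024        # read 64 MiB at a time
    TOTAL = 4 * 1024 * 1024 * 1024  # stop after 4 GiB per drive

    for dev in DEVICES:
        fd = os.open(dev, os.O_RDONLY)  # requires root
        done = 0
        start = time.time()
        while done < TOTAL:
            buf = os.read(fd, CHUNK)
            if not buf:                 # reached end of device
                break
            done += len(buf)
        os.close(fd)
        elapsed = time.time() - start
        print(f"{dev}: {done / elapsed / 1024**2:.1f} MB/s sequential read")

     A drive that reads far slower than its siblings here is a reasonable candidate for the RMA pile.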
  2. OK.   I am feeling like this was a HORRIBLE idea.

     

    My 1x12 RaidZ2 ZFS pool is only getting ~10MB/s write speeds. This is way lower than I am used to, and ZFS was chosen (previously suggested) for speed since I am using matching disks.

     

    Is this a known and fixable issue?   

     

    Also, it appears "Sync Filesystem" sticks for a long time when stopping the array or rebooting. I don't think it actually finishes.

    blinky-diagnostics-20240320-2331.zip

  3. Hello

     

    I have been running 12x12TB SAS3 drives on SAS2 enclosures for a while now, and I have been using XFS since I didn't have anywhere to offload the data to try ZFS. Today I'm taking delivery of another 12x12TB SAS3 drives that I plan to set up with ZFS, move the data to them, then add the existing drives to ZFS to expand storage to 24x12TB.

     

    In a week I will be receiving enclosures for SAS3 compatibility.

     

    Right now it's SAS3 cards, SAS3-to-SAS2 cables, and SAS2 enclosures with SAS3 drives.

     

    Next week it will be SAS3 all the way to the drives.

     

    Is ZFS the way to go? Is my plan the best path? Anything I'm not thinking about?

     

  4. Ok, 

     

    After some research, I have acquired 4x MD32 controllers (the MD3200 is dual link), both to benefit now and for future upgradability to 12Gb/s.

     

    A. I will likely configure as follows:

     

    1x 9300-8E HBA -> 1x MD3200 -> MD1200

    1x 9300-8E HBA -> 1x MD3200 -> MD1200

     

    So my server will have two 8E cards, with 2x SFF-8644 to SFF-8088 cables connecting each HBA to its MD3200, and one SFF-8088 cable connecting each MD3200 to its MD1200.

     

    B. My alternative would be all four MDs connected to the LSI SAS 6160, with the HBAs also connecting to the 6160. This would keep any device from being daisy-chained and allow dual links to the switch, dual links to the MD3200s, and single links to the MD1200s. In simple terms, the LSI SAS 6160 is basically an external expander that supports multi-path and can even connect multiple hosts to multiple DAS/SAN units.

     

    At this point I am looking for 2x 9300-8e cards. (I don't think a 16e would benefit, and from my understanding two 8e's would eliminate bottlenecking, especially when I later upgrade to a 12Gb/s chassis; rough bandwidth numbers are sketched below.)

     

    If you can confirm that my options are good, and whether you think B is better than A, let me know. I trust your judgement way more than my own, and your help was invaluable to my last build (42-bay chassis).
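
     For context on the bottleneck question, here is a rough back-of-envelope sketch. The link-rate figures are approximate encoded-bandwidth numbers and the per-drive throughput is an assumption, not a measurement:

    # Approximate usable per-lane bandwidth after encoding overhead.
    PCIE2_LANE_GBPS = 0.5    # PCIe 2.0: 5 GT/s with 8b/10b encoding
    PCIE3_LANE_GBPS = 0.985  # PCIe 3.0: 8 GT/s with 128b/130b encoding

    DRIVES = 24
    PER_DRIVE_GBPS = 0.25    # assumed ~250 MB/s sustained sequential per drive

    print(f"PCIe 2.0 x8 slot : {PCIE2_LANE_GBPS * 8:.1f} GB/s")
    print(f"PCIe 3.0 x8 slot : {PCIE3_LANE_GBPS * 8:.1f} GB/s")
    print(f"{DRIVES} drives combined : {DRIVES * PER_DRIVE_GBPS:.1f} GB/s sequential")

     On those assumed numbers, 24 drives streaming at once (~6 GB/s) would outrun a single PCIe 2.0 x8 slot (~4 GB/s), while two PCIe 3.0 x8 HBAs (~8 GB/s each) would not be the limiting factor, which is roughly the reasoning behind splitting the enclosures across two 8e cards.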

  5. 52 minutes ago, JorgeB said:

    Yes, they will work like SAS2 drives.

     

    Took a quick look and they don't support dual link; there's an out port for daisy chaining, or you can use the second module for redundancy (not supported by Unraid). For best performance you want to connect each MD1200 to a SAS wide port on the HBA, so get a second HBA or one with 4 ports. The bottleneck will then be the PCIe 2.0 slot of the H800; if the board/CPU supports PCIe 3.0, it would be faster with one or two PCIe 3.0 HBAs.

     

     

     

    Thanks for looking into this. The MD1200's do daisy chain, up to 10 in a chain, and right now I think the way it works is 6 drives per port on the controller.

    My current H800 P1 would be drives 1-6 on MD1200(1) and 1-6 on MD1200(2), and then P2 would be drives 7-12 on each. I'm thinking stacking the two new ones would not benefit me, and I would need a different card / cards.

     

    My R720XD can handle PCIe3 x8/x16 easily with multiple cards.   

     

    Do you have a PCIe3 HBA card / cards recommendation? With my risers I can have 3x full height and 3x low profile; only one low-profile slot is populated with a GPU right now.

    (FC16 HBA Full Height)

     

    Also, have you any thoughts on using a SAS switch as the intermediary between the HBAs and the enclosures? The LSI SAS 6160?

  6. Background: I currently have 2x MD1200's with 2TB drives, connected to an R720XD via a PERC H800.

     

    I am replacing the drives and adding two more MD1200s to the chain. The drives will all be 12TB 12Gb/s SAS3 (48 drives total).

     

    1. Would I be better off replacing the H800 with a different controller or adding a second card?

    2. What would be the optimal connection method/card to get the maximum speed out of the 12Gb/s drives?

     

    Most of the drives will be in the array, with the rest split between unassigned devices and cache pools.

     

  7. Ok - I found a MUCH easier way.....

     

    After changing goaccess.conf to the following:

    time-format %T
    date-format %d/%b/%Y
    log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
    log-file /opt/log/proxy_logs.log

     

     

    Simply add the following line to the "Advanced" tab of each proxy host in NGINX Proxy Manager - Official:

    access_log /data/logs/proxy_logs.log proxy;

     

    Like so (if you already have advanced config here, add the line at the VERY top):

    (Screenshot: the proxy host's Advanced tab with the access_log line added at the top.)

     

    Now they all log to the same file in the same format. Simply add the line to all proxy hosts, and remember to add it to any new ones (a quick check script is sketched below).

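     If you have a lot of proxy hosts, a small script can flag any that are missing the line. This is only a sketch and assumes the container's generated proxy host configs live under /data/nginx/proxy_host/ inside the NPM container; verify that path in your own install before relying on it.

    import glob

    # Assumed path to NPM's generated proxy host configs (check your container).
    CONF_GLOB = "/data/nginx/proxy_host/*.conf"
    LOG_LINE = "access_log /data/logs/proxy_logs.log proxy;"

    for conf in sorted(glob.glob(CONF_GLOB)):
        with open(conf) as handle:
            if LOG_LINE not in handle.read():
                print(f"missing access_log line: {conf}")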
  8. 22 minutes ago, fmp4m said:

    Here is the log format that NGINX Proxy Manager - Official uses for:

    Proxy hosts:

    '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"'

     

    standard:

    '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"'

     

    Someone who knows the goaccess format variables will need to convert the "proxy" one.

     

     

    Never mind, I got it:

     

    goaccess.conf:

    comment out the existing time/date/log formats and add this:

     

    time-format %T
    date-format %d/%b/%Y
    log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
     

    Then, under the log-file setting, add your list of proxy-host log files like so (note this is my list and will not match yours; find the files in your NGINX Proxy Manager - Official appdata logs and add each one you want to track). A rough parsing sketch of the proxy format follows the list.

     

    log-file /opt/log/proxy-host-12_access.log
    log-file /opt/log/proxy-host-13_access.log
    log-file /opt/log/proxy-host-14_access.log
    log-file /opt/log/proxy-host-15_access.log
    log-file /opt/log/proxy-host-3_access.log
    log-file /opt/log/proxy-host-4_access.log
    log-file /opt/log/proxy-host-5_access.log
    log-file /opt/log/proxy-host-6_access.log
    log-file /opt/log/proxy-host-8_access.log
    log-file /opt/log/proxy-host-9_access.log
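
     To sanity-check the conversion, here is a rough sketch of how one line in the NPM "proxy" format breaks down into the fields the goaccess format string above refers to. The sample line is made up for illustration, and the regex just mirrors the token layout; it is not how goaccess parses internally.

    import re

    # Hypothetical sample line in the NPM "proxy" log format.
    sample = ('[20/Mar/2024:23:31:07 +0000] MISS 200 200 - GET https example.com '
              '"/index.html" [Client 203.0.113.5] [Length 1024] [Gzip 2.1] '
              '[Sent-to 192.168.1.10] "Mozilla/5.0" "https://example.com/"')

    pattern = re.compile(
        r'\[(?P<date>[^:]+):(?P<time>\S+) (?P<tz>\S+)\] '        # [%d:%t %^]
        r'(?P<cache>\S+) (?P<upstream>\S+) (?P<status>\S+) - '   # %^ %^ %s -
        r'(?P<method>\S+) (?P<scheme>\S+) (?P<vhost>\S+) '       # %m %^ %v
        r'"(?P<request>[^"]*)" '                                 # "%U"
        r'\[Client (?P<client>[^\]]+)\] '                        # [Client %h]
        r'\[Length (?P<length>[^\]]+)\] '                        # [Length %b]
        r'\[Gzip (?P<gzip>[^\]]+)\] '                            # [Gzip %^]
        r'\[Sent-to (?P<sent_to>[^\]]+)\] '                      # [Sent-to %^]
        r'"(?P<agent>[^"]*)" "(?P<referer>[^"]*)"'               # "%u" "%R"
    )

    match = pattern.match(sample)
    if match:
        for field, value in match.groupdict().items():
            print(f"{field:10} {value}")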

  9. Here are the log formats that NGINX Proxy Manager - Official uses:

    Proxy hosts:

    '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"'

     

    standard:

    '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"'

     

    Someone who knows the goaccess format variables will need to convert the "proxy" one.

  10. 15 minutes ago, mgutt said:

    Additional rules can only be applied through the advanced tab of a host. You can't modify the files directly.

    I don't need to add additional rules; the log_format is invalid for Argo Tunnel, and I cannot modify the log_format via the Advanced tab.

     

    I was able to find the files in the container's console shell (the changes will persist until I update the Docker container). Debian buster-slim is nice.

  11. I spent time with this and got several things working, but it seems that with the Nginx Proxy Manager - Official container, the log format differs between fallback_access.log and the proxy host logs. Adding them all will not work, and information will go missing, because you can use either all the proxy host logs OR fallback_access.log; they're formatted differently.

     

    As for the permissions, use a folder or share that is not inside NginxProxyManager's/GoAccess's appdata and it will be able to read it correctly. I made a share called Logs that I use for various logging and mapped it to /mnt/user/Logs/NPM/.

  12. A note to all who come here for help with Argo Tunnel config.

     

    You should not use "noTLSVerify: true" in your config.yaml for anything other than troubleshooting. It is less safe to leave it that way.

    If enabling it resolves your issue during troubleshooting, the underlying problem can still be fixed securely; don't stop there and leave it on just because it works.

     

    Tips:

     

    originServerName: domain.com 

    ^ rarely works correctly; instead use:

    originServerName: subdomain.domain.com 

    ^ use a subdomain that has a VALID CNAME record pointed to the root of the domain ("@").

     

    In my example config here:

     

    tunnel: XXX
    credentials-file: XXX.json
    
    ingress:
    - service: https://proxydockerip:18443
      originRequest:
        originServerName: service.domain.ext

     

    proxydockerip can be the container name if you are using a custom Docker network, or the IP of the container that serves as your reverse proxy, such as SWAG or NPM.

    service.domain.ext is a valid CNAME of "service" pointed to "@" in the DNS of "domain.ext".

    This allows cloudflared / CF Argo Tunnel to validate correctly.

     

     

     

  13. 32 minutes ago, mkono87 said:

    I did receive an error notification for the docker driver with an uncorrected 1 status. I did a smart test after it finished the parity check. (I know not related). I did get a successful smart extended test, is this not to be trusted in this case?

     

     

    SMART checks drive health, not data health. You have corruption and will need to repair it.