DieFalse

Members
  • Posts: 426
  • Joined
  • Last visited
  • Days Won: 1

DieFalse last won the day on April 22 2018

DieFalse had the most liked content!

About DieFalse

  • Birthday September 26

  • Gender: Male

Recent Profile Visitors

3277 profile views

DieFalse's Achievements

Enthusiast (6/14)

32 Reputation

  1. Hello, I have been running 12x 12TB SAS3 drives on SAS2 enclosures for a while now, and I have been using XFS since I didn't have anywhere to offload the data to try ZFS. Today I'm taking delivery of 12 more 12TB SAS3 drives that I plan to set up with ZFS, move the data onto, and then add the existing drives to expand storage to 24x 12TB. In a week I will be receiving enclosures for SAS3 compatibility; right now it's SAS3 cards, SAS3-to-SAS2 cables, and SAS2 enclosures with SAS3 drives, but it will be SAS3 all the way to the drives next week. Is ZFS the way to go? Is my plan the best path? Anything I'm not thinking about? (See the pool sketch after this list.)
  2. Won't I lose throughput by dropping to a single SAS connector to the enclosure?
  3. Main rig has 368GB DDR4 ECC. Secondary has 256GB DDR4 ECC. Lab has 512GB DDR3 ECC.
  4. I have lost my filesystem and shares four times in the last three days. I suspect a potential hardware failure is causing it but cannot pinpoint the issue. Rebooting restores everything and the array goes back to normal; this only occurs when the filesystem is being hammered by more than 1.6Gbps of downloads. Diagnostics attached, please help. blinky-diagnostics-20240309-0419.zip
  5. Definitely start a support thread and post your diagnostics in it.
  6. vnstat is needed for Network Statistics to function correctly, and it is not currently startable on my servers ("vnstat service must be running STARTED to view network stats"). Please keep it. Output from the server (a disk-space check sketch follows this list):
     root@Arcanine:~# vnstat
     Error: Database "/var/lib/vnstat//vnstat.db" contains 0 bytes and isn't a valid database, exiting.
     root@Arcanine:~# vnstat
     Error: Failed to open database "/var/lib/vnstat//vnstat.db" in read-only mode.
     The vnStat daemon should have created the database when started. Check that it is configured and running. See also "man vnstatd".
     root@Arcanine:~# vnstatd -d
     Error: Not enough free diskspace available in "/var/lib/vnstat/", exiting.
  7. Ok, after some research I have acquired 4x MD32 controllers (the MD3200 is dual link). To benefit now as well as allow future upgradability to 12Gb/s:
     A. I will likely configure as follows:
        1x 9300-8e HBA to 1x MD3200 to MD1200
        1x 9300-8e HBA to 1x MD3200 to MD1200
        So my server will have two 8e cards, with 2x SFF-8644 to SFF-8088 cables connecting each HBA to its MD3200 and one SFF-8088 cable connecting each MD3200 to its MD1200.
     B. My alternative would be all 4 MDs connected to the LSI SAS 6160, with the HBAs also connecting to the 6160. This would keep any device from being chained and allow dual links to the switch, dual links to the MD3200s, and single links to the MD1200s. The LSI SAS 6160 is basically an external expander in simple terms, one that has multi-path and can even connect multiple hosts to multiple DAS/SANs.
     At this point I am looking for 2x 9300-8e cards. (I don't think a 16e would benefit me, and from my understanding two 8e's would eliminate bottlenecking, especially when I later upgrade to a 12Gb/s chassis.) If you confirm that my options are good, and if you think B is better than A, let me know. I trust your judgement way more than my own, and your help was invaluable to my last build (42-bay chassis).
  8. Thanks for looking into this. The MD1200s do daisy chain, up to 10 in a chain, and right now I think the way it works is 6 drives per port on the controller: on my current H800, port 1 would be drives 1-6 on MD1200 #1 and 1-6 on MD1200 #2, and port 2 would be drives 7-12 on each. I'm thinking stacking the two new ones would not benefit me, and I would need a different card or cards. My R720XD can handle PCIe 3.0 x8/x16 easily with multiple cards. Do you have a PCIe 3.0 HBA card (or cards) recommendation? With my risers I can have 3x full height and 3x low profile; only one low-profile slot is populated with a GPU right now (the FC16 HBA is full height). Also, have you any thoughts on using a SAS switch as the intermediary between HBAs? LSI SAS 6160?
  9. Sorry, I should have added this: the drives may be 12Gb/s, but the controller and MD1200s are 6Gb/s, so my max bandwidth will be limited by the card(s) and chassis. Would it be best to have 2x MD1200 on one card and 2x on the other, or all 4 on one card? (Rough bandwidth math is sketched after this list.)
  10. Background: I currently have 2x MD1200s with 2TB drives, connected to an R720XD via a PERC H800. I am replacing the drives and adding two more MD1200s to the chain; the drives will all be 12TB 12Gb/s SAS3 (48 drives total). 1. Would I be better off replacing the H800 with a different controller or adding a second card? 2. What would be the optimal connection method/card to enable the max speed the 12Gb/s drives are capable of? Most will be in the array, the others split between unassigned and cache pools.
  11. Ok, I found a MUCH easier way. After changing goaccess.conf to:
      time-format %T
      date-format %d/%b/%Y
      log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
      log-file /opt/log/proxy_logs.log
      simply add the following line to each proxy host's "advanced" tab in NGINX Proxy Manager - Official:
      access_log /data/logs/proxy_logs.log proxy;
      (If you already have advanced config there, add the line at the VERY top.) Now they all log to the same file in the same format; just add the line to all proxy hosts and remember to add it to any new ones. (A goaccess invocation sketch follows this list.)
  12. Nevermind, I got it. In goaccess.conf, comment out the existing time/date/log formats and add this:
      time-format %T
      date-format %d/%b/%Y
      log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
      Then add your list of proxy-host log files under the log-file setting like so (note this is my list and is not the same as yours; find these in your NGINX Proxy Manager - Official appdata logs and add each one you want to track):
      log-file /opt/log/proxy-host-12_access.log
      log-file /opt/log/proxy-host-13_access.log
      log-file /opt/log/proxy-host-14_access.log
      log-file /opt/log/proxy-host-15_access.log
      log-file /opt/log/proxy-host-3_access.log
      log-file /opt/log/proxy-host-4_access.log
      log-file /opt/log/proxy-host-5_access.log
      log-file /opt/log/proxy-host-6_access.log
      log-file /opt/log/proxy-host-8_access.log
      log-file /opt/log/proxy-host-9_access.log
  13. Here is the log format that NGINX Proxy Manager - Official uses.
      Proxy hosts:
      '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"'
      Standard:
      '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"'
      Someone who knows the goaccess variables will need to convert the "proxy" one.
  14. This is a great thought - I shall try that! Thanks!
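
A minimal sketch of the pool layout described in item 1, assuming two 12-drive raidz2 vdevs (the posts don't specify a layout); the pool name "tank" and the /dev/sdX device names are placeholders, and Unraid would normally build the pool through its own GUI rather than these raw commands:

     # Create the initial pool from the 12 new drives as a single raidz2 vdev
     zpool create tank raidz2 /dev/sd{b,c,d,e,f,g,h,i,j,k,l,m}

     # After the data has been migrated off the original 12 drives,
     # add them as a second raidz2 vdev to grow the pool to 24x 12TB
     zpool add tank raidz2 /dev/sd{n,o,p,q,r,s,t,u,v,w,x,y}

     # Verify both vdevs are online
     zpool status tank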
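
A quick troubleshooting sketch for the vnstat errors quoted in item 6, assuming shell access on the server; the final error points at free space in /var/lib/vnstat, so check that before touching the database (removing vnstat.db discards any existing statistics):

     # Is the filesystem holding /var/lib/vnstat actually out of space?
     df -h /var/lib/vnstat

     # The first error mentions a zero-byte database; confirm it
     ls -l /var/lib/vnstat/vnstat.db

     # Once space is available, remove the empty database and let the daemon recreate it
     rm /var/lib/vnstat/vnstat.db
     vnstatd -d

     # Should now list interfaces instead of erroring
     vnstat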
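
Rough bandwidth math behind items 2 and 9, assuming 4-lane SFF-8088 wide ports and roughly 250 MB/s (~2 Gb/s) sustained per spinning drive; both figures are assumptions, not numbers from the posts:

     # One SAS2 wide port: 4 lanes x 6 Gb/s
     echo $(( 4 * 6 ))     # 24 Gb/s per x4 port

     # Twelve HDDs streaming sequentially at ~2 Gb/s each
     echo $(( 12 * 2 ))    # ~24 Gb/s per fully loaded 12-drive shelf

So a single 6Gb/s x4 link is already close to what one shelf of spinning drives can stream, which suggests that spreading the four MD1200s across two HBAs (or at least across both ports of one card) matters more than the 12Gb/s rating of the drives themselves.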
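
To go with items 11-13: once goaccess.conf carries the log-format and log-file settings above, a report run might look like the sketch below; the config path and output location are assumptions, so adjust them to wherever goaccess actually lives in your setup:

     # One-shot HTML report from the proxy log(s) listed in the config
     goaccess --config-file=/etc/goaccess/goaccess.conf -o /opt/log/report.html

     # Or keep the report updating live while NGINX Proxy Manager writes to the log
     goaccess --config-file=/etc/goaccess/goaccess.conf -o /opt/log/report.html --real-time-html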