DieFalse

Members

  • Posts: 432
  • Days Won: 1
  • Birthday: September 26
  • Gender: Male

DieFalse last won the day on April 22 2018

DieFalse had the most liked content!

3342 profile views

DieFalse's Achievements: Enthusiast (6/14)

Reputation: 33
  1. Ok, I fixed ident.cfg by re-enabling SSL and SSH and renaming the server, and I am now back in my server. It 100% seems like ident.cfg is the only file that changed. Could this somehow be tied to the recent update, with something causing ident.cfg to reset to its defaults? All my assignments, pools, and everything in the other configs (VMs, Dockers, etc.) are 100% correct as far as I can tell. @JorgeB that docker is expected and correct.
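For anyone hitting the same reset, the relevant keys can be checked directly on the flash drive. A minimal sketch, assuming a stock install where the file lives at /boot/config/ident.cfg and uses NAME / USE_SSL / USE_SSH keys (a sample copy is written to /tmp here so the sketch is self-contained; values are illustrative):

```shell
# Hypothetical sample of the ident.cfg keys that reset for me
# (key names assumed from a stock Unraid install).
cat > /tmp/ident.cfg <<'EOF'
NAME="blinky"
USE_SSL="yes"
USE_SSH="yes"
EOF

# On a live server you would point this at /boot/config/ident.cfg instead.
grep -E '^(NAME|USE_SSL|USE_SSH)=' /tmp/ident.cfg
```

If NAME has fallen back to "Tower" and the SSL/SSH flags to "no", that matches the symptoms I saw.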
  2. I would like the ability to download more than the current USB backup from Unraid Connect. My server lost some config files during a hard power issue, and when it rebooted with internet access it uploaded a new backup, so downloading that backup was fruitless. If I were able to download, say, even two copies, the older one would be correct. I know this would require storage on Unraid's side and local backups are preferred, so this feature is a heavy ask, but I am sure it would help more than just me.
  3. I went to download the backup from Unraid Connect, and it seems the "Tower" config uploaded itself, so I cannot download a previous backup from there. The config files all seem correct in my review, except for ident.cfg containing "Tower". I cannot see where or why nginx won't load.
  4. My power went out and my UPS batteries failed (new ones are supposed to be here soon), so the server had a hard stop. I expected file corruption, but not this. The server is unreachable via HTTP/HTTPS/SSH. It thinks it's "Tower" instead of "blinky". It remembers its authentication during a local login. It remembers its IP assignment (I can't recall if I reserved this IP in my firewall, though). GUI mode remains a black screen. Shell login allows me to log in and pull the diagnostics. The web UI returns 404 Not Found. tower-diagnostics-20240411-0948.zip
  5. This worked. I built 3x4 mirror pools and tested each until I found the slowdown, located it in one test group, then isolated it to two drives so far. I pulled the two drives, as I have two spares, built my RAIDZ2 1x12 pool, and am now getting 250MB/s+ on transfer, which is much more acceptable given I have 101TB of data to transfer before I can format the original 12 drives and add them as the second 12-drive group, making it 2x12 RAIDZ2. Once that's done, I will test the two drives individually and return the culprit(s) to ensure I have spares on hand.
  6. OK, I am feeling like this was a HORRIBLE idea. My ZFS pool (1x12 RAIDZ2) is only getting ~10MB/s write speeds. This is way lower than I am used to, and ZFS was chosen / previously suggested for speed since I am using matching disks. Is this a known and fixable issue? Also, "Sync Filesystem" appears to stick for a long time when stopping the array or rebooting; I don't think it actually finishes. blinky-diagnostics-20240320-2331.zip
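For context on how far off 10MB/s is: a rough sanity check, assuming ~150MB/s sustained sequential per 12TB SAS drive (an assumed figure, not a measurement), since a healthy RAIDZ2 streams at roughly (disks - parity) times the per-disk rate:

```shell
# Ballpark sequential write for a 12-wide RAIDZ2 (per-disk rate is assumed).
disks=12
parity=2
per_disk_mb=150   # assumed sustained MB/s for one 12TB SAS drive

ballpark=$(( (disks - parity) * per_disk_mb ))
observed=10
echo "ballpark: ${ballpark} MB/s"
echo "observed ${observed} MB/s is $(( ballpark / observed ))x lower"
```

Roughly two orders of magnitude below even a conservative estimate, which points at something broken (a failing drive or link) rather than a RAIDZ2 design limit.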
  7. Hello, I have been running 12x 12TB SAS3 drives on SAS2 enclosures for a while now, and I have also been using XFS, since I didn't have anywhere to offload the data to try ZFS. I now have delivery today of 12x 12TB SAS3 drives that I plan to set up with ZFS and then move the data to them, then add the existing drives to ZFS to expand storage to 24x 12TB drives. In a week I will be receiving enclosures for SAS3 compatibility; right now it's SAS3 cards, SAS3-to-SAS2 cables, and SAS2 enclosures with SAS3 drives, but it will be SAS3 all the way to the drives next week. Is ZFS the way to go? Is my plan the best path? Anything I'm not thinking about?
  8. Won't I lose throughput by dropping to a single SAS connector to the enclosure?
  9. Main rig has 368GB DDR4 ECC. Secondary has 256GB DDR4 ECC. Lab has 512GB DDR3 ECC.
  10. I have lost my filesystem and shares four times in the last three days. I am suspicious of potential hardware failure causing it but cannot pinpoint the issue. Rebooting restores everything and the array goes back to normal; this only occurs when the filesystem is being hammered by more than 1.6Gbps of downloads. Diagnostics attached, please help. blinky-diagnostics-20240309-0419.zip
  11. Definitely start a support thread and post your diagnostics in it.
  12. VNSTAT is needed for Network Statistics to function correctly, and it is not currently startable on my servers ("vnstat service must be running STARTED to view network stats."). Please keep it.
      root@Arcanine:~# vnstat
      Error: Database "/var/lib/vnstat//vnstat.db" contains 0 bytes and isn't a valid database, exiting.
      root@Arcanine:~# vnstat
      Error: Failed to open database "/var/lib/vnstat//vnstat.db" in read-only mode. The vnStat daemon should have created the database when started. Check that it is configured and running. See also "man vnstatd".
      root@Arcanine:~# vnstatd -d
      Error: Not enough free diskspace available in "/var/lib/vnstat/", exiting.
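The last error suggests the real blocker is free space rather than vnstat itself. A minimal sketch of the first thing to check, assuming the database path from the errors above (with a fallback to / so the check runs anywhere):

```shell
# vnstatd refuses to create its database when the filesystem behind
# /var/lib/vnstat is full; check free space on that mount first.
df -h /var/lib/vnstat 2>/dev/null || df -h /

# Once space is freed, the usual recovery (steps assumed from vnstat 2.x
# behavior: the daemon recreates a missing database on startup) would be:
#   rm -f /var/lib/vnstat/vnstat.db
#   vnstatd -d
```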
  13. Ok, after some research, I have acquired 4x MD32 controllers (the MD3200 is dual link) to benefit now as well as for future upgradability to 12Gb/s.
      A. I will likely configure as follows: 1x 9300-8e HBA to 1x MD3200 to MD1200, and 1x 9300-8e HBA to 1x MD3200 to MD1200. So my server will have two 8e cards, with 2x SFF-8644 to SFF-8088 cables connecting each HBA to an MD3200 and one SFF-8088 cable connecting each MD3200 to an MD1200.
      B. My alternative would be all 4 MDs connected to the LSI SAS 6160, with the HBAs also connecting to the 6160. This would keep any device from being chained, and allow dual links to the switch, dual links to the MD3200s, and single links to the MD1200s. The LSI SAS 6160 is, in simple terms, basically an external expander with multi-path support that can even connect multiple hosts to multiple DAS/SANs.
      At this point I am looking for 2x 9300-8e cards. (I don't think a 16e would benefit, and from my understanding two 8e's would eliminate bottlenecking, especially when I later upgrade to a 12Gb/s chassis.) If you can confirm that my options are good, and whether you think B is better than A, let me know. I trust your judgment way more than my own, and your help was invaluable on my last build (the 42-bay chassis).
  14. Thanks for looking into this. The MD1200s do daisy chain, up to 10 in a chain, and right now I think the way it works is 6 drives per port on the controller: my current H800's P1 would be drives 1-6 on MD1200(1) and 1-6 on MD1200(2), and then P2 would be drives 7-12 on each. I'm thinking stacking the two new ones would not benefit me, and I would need a different card or cards. My R720XD can handle PCIe3 x8/x16 easily with multiple cards. Do you have a PCIe3 HBA card recommendation? With my risers I can have 3x full height and 3x low profile; only one low-profile slot is populated with a GPU right now (plus an FC16 HBA in a full-height slot). Also, have you any thoughts on using a SAS switch, like the LSI SAS 6160, as the intermediary between HBAs?
  15. Sorry, I should have added this: the drives may be 12Gb/s, but the controller and MD1200s are 6Gb/s, so my max bandwidth will be limited by the card(s) and chassis. Would it be best to have 2x MD1200 on one card and 2x on the other, or all 4 on one card?
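To put rough numbers on the single-link question: a back-of-envelope sketch, assuming 6Gb/s SAS2 lanes (~600MB/s usable per lane after 8b/10b encoding), x4 lanes per SFF-8088 wide port, and ~150MB/s sustained per drive (all assumed figures, not measurements):

```shell
lane_mb=600                        # ~usable MB/s per 6Gb/s lane after encoding
port_mb=$(( 4 * lane_mb ))         # one SFF-8088 wide port = 4 lanes
drive_mb=150                       # assumed sustained MB/s per drive
enclosure_mb=$(( 12 * drive_mb ))  # one fully streaming 12-bay MD1200

echo "x4 wide port:        ${port_mb} MB/s"
echo "one MD1200 demand:   ${enclosure_mb} MB/s"
echo "two chained MD1200s: $(( 2 * enclosure_mb )) MB/s"
```

By these assumed numbers, one wide port covers a single MD1200 with headroom (2400 vs 1800 MB/s) but becomes the bottleneck for two chained enclosures (3600 MB/s of demand), which favors splitting 2x MD1200 per card over hanging all four off one card.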