BlinkerFluid

Everything posted by BlinkerFluid

  1. Yeah, I'm not sure. I'd definitely try it in a totally different PC, even if I had to use my main Windows box, and see if you can see it POST. Have you updated your motherboard's BIOS to the most recent version? Sometimes that can have an impact with an older board, especially with UEFI stuff, but that would be a board about 7 years old.
  2. I think you may have controller issues on multiple drives. Check cable/PCIe card seating, and rule out power supply issues. Is your SAS card in IT mode? Your read error rate / ECC corrected error rate is extremely high on multiple drives, and you have all kinds of SATA link notifications in your syslog files, which makes it hard to find other entries. In your SMART data, disk 3 and disk 1 look extremely similar and seem to both be passing SMART tests. However, disk 2, which is the same age/make/etc., has 0 read errors and 0 ECC corrected. I'll let some more seasoned Unraid users chime in. Also, as old as your drives are, you should probably add a 2nd parity drive, because your likelihood of more than 1 drive failing is higher.
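If you want to pull those SMART attributes yourself, smartctl from smartmontools will show them. `/dev/sdX` is a placeholder for your actual device; drives behind a SAS HBA may need an explicit `-d` type:

```shell
# Full SMART report for one drive (replace /dev/sdX with the real device).
smartctl -a /dev/sdX

# Drives behind a SAS/SATA HBA sometimes need the transport type spelled out:
smartctl -a -d sat /dev/sdX

# Just the attribute table, filtered to the error counters discussed above:
smartctl -A /dev/sdX | grep -Ei 'read_error|ecc'
```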
  3. You should at least see it POST on boot and in the BIOS. My first thought is the same as Squid's: you probably have a disabled PCIe slot, either because a bunch of NVMe drives are taking lanes away from some slots or because of some setting in the BIOS that disables it. The first thing to do is consult your motherboard manual. If that isn't it, do you have another PC to put it in? It is possible the hardware has failed, but that's unlikely. I'm running a Mellanox ConnectX-3 341A 10Gb SFP+ without issues on Unraid, so it should be supported once you get it figured out: "Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]"
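That quoted "Ethernet controller" line comes from lspci. Once the slot is sorted out, a quick check like this confirms the OS actually sees the card:

```shell
# List PCI devices and filter for the Mellanox NIC.
# No output here means the slot is likely disabled or the card isn't seated.
lspci | grep -i mellanox

# More detail for the match: kernel driver in use, capabilities, etc.
lspci -vk | grep -iA6 mellanox
```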
  4. I have been super happy with an HGST 4U60 since 2016: 60 SAS/SATA drive slots in a 4U footprint, and it idles at 80 watts with no drives if only using 1 of the PSUs. 12Gb SAS as well. There are some newer versions that are pretty much the same; I see them pop up on eBay occasionally. You would probably also want some kind of rack for it, as it gets very heavy with drives. I did have to plan my server rack around having a 240V power hookup, as it is 240V only for power. There is one small benefit: 240V power is 3% more efficient than 120V, so a little drop in the power savings bucket. That was most of a day running new electrical DIY for that. But it has run flawlessly for years and years.
  5. SpaceInvader One just posted a way to script this a few weeks ago and it works. I pre-emptively did this for my Nvidia card when installing Unraid last week - https://www.youtube.com/watch?v=KD6G-tpsyKw
  6. It'll work as long as you flash the PERC H200 to IT mode. Any reason you are using 3TB SAS drives? If you are buying them, you would be much better off buying 10TB SAS drives; there is an eBay listing for $99 for used 10TB HGST SAS drives.
  7. You should count yourself lucky if it doesn't end up frying your motherboard and drives. If something seems too good to be true, most of the time it is.
  8. Enterprise SAS drives when possible. WD Ultrastar line or HGST lately
  9. Ok, that makes sense - just booting into Unraid temporarily to format, then booting back, instead of trying to do the formatting manually. I have enough empty slots that I can do 5 at a time that way. If I didn't have them in an HGST 4U60, with a bunch of them being SAS drives, it would be a lot easier.
  10. Looking for the correct way to format XFS drives so I can just drop them into a new Unraid array and, after adding them, turn on parity. I'm trying to avoid having to move all my data around inside Unraid. I have 130ish TB on a bunch of drives, and I would have a lot of downtime while moving the data from unassigned devices over to the array, since it seems like I can't pool/merge/union the unassigned devices, right? If I format the drives correctly and move the data over before going to Unraid, I should be able to just configure all the plugins/dockers, add two parity drives, and be up and running. Currently my research suggests that if I partition the spinning disks in gdisk with an alignment of 64 sectors and the partition starting at sector 64 instead of the default 2048, that is how Unraid likes it. I then format it as XFS and label the drive. It seems to look right; however, when I do this I get a superblock issue and can't then mount the drive. It seems like the superblock resides in the first 512 blocks. What am I missing? Has anyone manually partitioned and formatted drives in another OS for use in Unraid, and what did you have to do?
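For reference, the layout described above would look something like this with sgdisk. This is only a sketch of the post's described layout, not a guarantee Unraid will accept the result; the device name and partition label are placeholders, and it wipes the disk:

```shell
# DANGER: destroys all data on the target disk. /dev/sdX is a placeholder.
# Wipe any existing partition tables, then create a single GPT partition
# aligned to 64 sectors and starting at sector 64 (the layout the post
# describes), spanning the rest of the disk:
sgdisk --zap-all /dev/sdX
sgdisk -a 64 -n 1:64:0 -t 1:8300 /dev/sdX

# Format the new partition as XFS with a label:
mkfs.xfs -L disk1 /dev/sdX1

# Sanity check: mount it and confirm the superblock is readable.
mkdir -p /mnt/test && mount /dev/sdX1 /mnt/test && df -h /mnt/test
```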
  11. It definitely would if there were multiple fast SSDs striped or in RAID 0. It really just depends on how fast your transfer rates are from your storage to your system. In a home environment? Probably not.
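The rough arithmetic behind that, with ballpark sequential-throughput figures (assumptions, not measurements):

```shell
# Back-of-envelope check: can the storage keep a 10GbE link busy?
link=1250   # 10 Gbit/s is roughly 1250 MB/s before protocol overhead
hdd=250     # one 7200rpm HDD, sequential, ballpark
ssd=550     # one SATA SSD, sequential, ballpark

echo "single HDD fills $(( 100 * hdd / link ))% of the link"   # 20%
echo "single SSD fills $(( 100 * ssd / link ))% of the link"   # 44%

# Three SATA SSDs striped (RAID 0) roughly triple sequential throughput,
# which is the point where the 10GbE link becomes the bottleneck:
echo "3-SSD stripe: $(( 3 * ssd )) MB/s vs link ${link} MB/s"  # 1650 vs 1250
```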