BlinkerFluid

Members · 37 posts
Everything posted by BlinkerFluid

  1. I and most others have used the Mellanox ConnectX-3. You can find dual-SFP+ versions on eBay cheap, so why spend double or triple for one that will work the same? If you want to see whether it's supported, check which Linux kernel version your Unraid release ships and see if that chipset has a driver in that kernel.
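That kernel check can be done right from the Unraid console; a minimal sketch, assuming the mlx4_core module name that ConnectX-3 cards use under Linux:

```shell
# Show the running kernel, then ask whether the ConnectX-3 driver
# (mlx4_core) is known to it; the fallback echo keeps the check from
# erroring out on kernels built without the module.
uname -r
modinfo mlx4_core 2>/dev/null | grep -im1 '^description' \
  || echo "mlx4_core not found in this kernel"
```

If the description line prints, the driver shipped with that kernel and the card should just work.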
  2. Why not just run external SAS to a SAS expander in the Apple XRAID chassis from your current server? You could retain its power supply and just use it as a storage enclosure. Attached is a picture of how that can work.
  3. If you are not transcoding, then either one. I personally like ECC RAM for Unraid.
  4. @olehj Yay that worked! I now see all 30+ drives after spinning them up and scanning. Now to reload the backup I made and sort them to their slots. Thank you for all your hard work! Enjoy some beer
  5. Yep, everything is up to date. I backed up and then deleted the database, and it still shows the same drives from a scan. When I tried it, I checked with smartctl --all /dev/* and they all have different serial numbers, vendors, products, revisions, and logical unit IDs. I've attached the SMART output for 4 drives I checked (not worried since they are old): two 10TB HGST SAS drives and my two main 3.84TB SAS SSDs. unraid smart data 6.6.22.txt
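For anyone repeating that comparison, a sketch that walks the SCSI generic devices and prints only the identity fields; the /dev/sg* naming and an installed smartctl are assumptions:

```shell
# Print vendor/product/serial/LUN identity lines for each SCSI generic
# device so duplicate identities are easy to spot; degrades gracefully
# when smartctl or the devices are absent.
if ! command -v smartctl >/dev/null 2>&1; then
  echo "smartctl not installed"
else
  for dev in /dev/sg*; do
    [ -e "$dev" ] || continue
    echo "== $dev =="
    smartctl --all "$dev" | grep -Ei 'vendor|product|serial|logical unit' || true
  done
fi
```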
  6. Start a new topic and don't jump on someone else's, please. No for hardware RAID, and ZFS isn't natively supported yet (it should be for 6.11). The normal method would be to set up an XFS array with 1 or 2 parity drives and let Unraid manage it.
  7. If you can compile the driver under Linux, Unraid would support it, I think. Out of the box, Unraid does not support Fibre Channel; I think TrueNAS Enterprise is the only thing I have used that has it. You are talking about very specific enterprise gear that not many have access to. Also, you'd need a script to load the driver on every boot.
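On Unraid, boot-time setup like that usually goes in the /boot/config/go script, which runs at every boot. A sketch, with the .ko filename and module name as placeholders for whatever the compiled Fibre Channel driver is actually called:

```shell
# Appended to /boot/config/go -- Unraid executes this script at every boot.
# Copy the compiled module off the flash drive into the running kernel's
# module tree, rebuild the dependency map, then load it.
cp /boot/extra/my_fc_driver.ko "/lib/modules/$(uname -r)/extra/"  # placeholder .ko name
depmod -a
modprobe my_fc_driver  # placeholder module name
```

Keeping the .ko on the flash drive matters because the Unraid root filesystem is rebuilt in RAM on every boot.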
  8. Do I need to reboot? I ran a force scan and it didn't find any new drives.
  9. You can look up the server/motherboard information on the Supermicro website and check the form factor to make sure.
  10. You should probably start a new hardware topic on M.2-to-SATA boards and ask there.
  11. lsscsi -u -b output and lsscsi -g output attached, along with smartctl output sent to txt files. SG29 is a SATA drive that is working; SG30 is a SAS drive that is not. smartctl-sg29.txt smartctl-sg30.txt
  12. Yeah, expander cards work just like a dumb network switch: one SAS cable in, then SAS cables out from the expander card to the rest. If you are not using any SAS drives, you could use two cables in, and one would work as a failover. (With SAS drives it tries to do dual-path SAS, which Unraid doesn't support, and you end up with drives listed twice under different addresses.)
  13. None of my SAS drives show up. Attached lsscsi -ug output. I know you haven't updated the plugin yet, but just posting now that 6.10 is stable. lsscsi ug output 5.21.22.txt
  14. This listing looks like the same card as the tri-mode one based on comparing specs, but I guess it could be a knockoff: https://www.ebay.com/itm/284672380344?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=JnVW1VgjTbK&sssrc=2047675&ssuid=&widget_ver=artemis&media=COPY Another possible route would be to just use a 12G SAS expander card and a much cheaper 12G SAS card like a 9400-8i. I'm running a single 12G SAS cable into a backplane and still max out drive speeds just fine with tons of drives, including 4 SAS SSDs and 3 SATA SSDs.
  15. Yes. There is no way to interface with your drives without a SAS card; the expander doesn't provide anything other than more SAS ports to connect things to. The expander card can go in whatever slot it fits in, and the smaller the better, as it literally only gets power from the PCIe slot. You don't need to use the external port for it at all. Just connect your SAS card to whichever expander port is easiest. Sounds like you want internal SAS 8087 to internal SAS 8087.
  16. That is a SAS expander. It is powered by PCIe but doesn't show up in POST or anything, as it is a dumb device using PCIe just to power itself. You also need a SAS card to go with it. Think of it like a network switch: you still need an Ethernet card to interface with it. You run the SAS cable or cables from the PCIe SAS card into the expander using any ports you want, and from the expander any still-available ports go out to the drives (unless the expander tells you which ones to use). You are going to need a PCIe SAS card that can be flashed to IT mode to go with it. SAS 6G cards are pretty cheap; the M1015 is what a lot of us have run, but you can check back through this topic to see what devices are recommended. I have a 9300-8e (SAS 12G).
  17. Another option besides a tower is to get a small-form-factor machine with a PCIe slot and use an external SAS PCIe card to connect to a DIY DAS box or an enclosure with a SAS backplane. The HP S01 and HP 290 come to mind because I know they have a full PCIe slot, but there are others. This is the best way I can think of to use a mini PC/small form factor and still have a ton of directly attached storage. I don't think your originally proposed hardware is going to work due to the lack of expansion slots.
  18. +1, massive difference between copper and fiber
  19. Questions:
      1.) A SAS card with two or more ports: one port to the first case, one port to an external SAS adapter and then out to the 2nd. The 2nd case only needs a power supply, a power switch, a SAS expander card, and an external SAS adapter.
      2.) SAS card/expanders/external adapters can take SAS or SATA drives. Backplanes in the case help.
      3.) Supermicro stuff is high quality, but really whatever deal you can find. They also have a 4U 36-drive one.
      4.) Rack-mounting hardware can be expensive and takes a while to ship if you actually use a server rack. Some of the better server-grade drive enclosures only take 240V power.
  20. A counterpoint to those above me: you might as well go v4 with that Xeon; lower idle power and not much more expensive. Add some kind of Nvidia card for Plex as well; if you don't have a huge number of users, the Nvidia Quadro P400 works great. For 10Gbit I've been using 10Gbit SFP+ for years, though it's harder to run than Ethernet. When you say the case is not relevant, where are you putting all those drives? Hopefully you have a DAS or NAS for them and connect through a SAS 6G or 12G PCIe card. I'm running a Xeon E5-2680 v2, an Nvidia 1070, 10Gbit SFP+, and SAS 12G to a DAS enclosure. It easily handles anything I throw at it and idles at 75 to 80 watts for the PC and PCIe cards alone, which to me is nothing; you should have even better idle with v3/v4. It's 160 watts with the 60-slot DAS enclosure powered on and no drives.
      To other members' points:
      1. True, QSV is more efficient, but Nvidia cards idle at around 10 watts or less, which is where one will spend most of its time. That's less than $8 a year; let's round it up to $15 because it is transcoding part of the day.
      2. Passmark: the Intel 12100F scores only about 2000 more than my E5-2680 v2. Plenty of compute performance there.
      3. Cheap ECC RAM: I'm running 128GB of DDR3 ECC in four 32GB sticks, and it was cheap.
      4. Server-grade hardware vs the consumer stuff most people are running: I ran into a lot more issues trying consumer-grade gear recently before returning to the original server hardware I already had. This system (Wiwynn Open Compute, 2 nodes) has had zero issues since 2016.
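The "less than $8 a year" figure is easy to sanity-check; a sketch assuming a ~10 W idle draw and a $0.09/kWh electricity rate (both assumptions, plug in your own numbers):

```shell
# annual cost = (watts / 1000) kW x 24 h x 365 days x rate in $/kWh
awk 'BEGIN { watts = 10; rate = 0.09; printf "$%.2f/yr\n", watts / 1000 * 24 * 365 * rate }'
# prints $7.88/yr
```

At a higher rate like $0.12/kWh the same 10 W works out to roughly $10.50/yr, still small either way.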
  21. your ram is too low if you plan to run VMs
  22. New plugin user here. Are you still looking for users to submit their storage server info from your Jan post? I have an HGST 4U60 I could provide that info for. Also, none of my many SAS drives show up in a scan or forced scan, even though they show in the console SMART output. However, I'm on 6.10 RC5, so I'll just wait until it goes stable and you get a chance to update the plugin per page 1.
  23. An 11th-gen Intel processor with the iGPU would be ideal. 12th gen isn't supported yet, but it will be after the Unraid kernel gets upgraded, probably by release 6.11. 12th gen seems to add about 20% more Quick Sync processing, so not a huge deal. You could also use any of the 8th- through 11th-gen iGPUs.
  24. I totally misread this thread yesterday. So there is a thing called "force P2 state" in the Windows world for Nvidia cards in the 1xxx series. I wonder if the same thing is going on here and that is why you can't get it out of that power state. I assume you have tried running more transcode workers on it in tdarr and it is just stuck there?