thestraycat

Everything posted by thestraycat

  1. Can anyone confirm whether the Seagate 6TB SAS drives (ST6000NM0034) are currently playing nicely (spinning up/down correctly) with any of the LSI2008 controllers? Anyone's confirmation either way would be amazing!
  2. Can anyone help? Can I add a line to my Unraid config to issue the command "PWM FAN = 40" on startup? That would at least be a workaround...
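A minimal sketch of what that startup workaround could look like, assuming the array fan sits on a standard Linux hwmon PWM node (the hwmon index, the pwm channel and the 0-255 scaling are assumptions that vary by board, so check /sys/class/hwmon/*/name first):

```python
# Minimal sketch: force a fixed PWM duty cycle at boot by writing to the hwmon
# sysfs node. The hwmon2/pwm2 path is hypothetical -- identify the right device
# by reading /sys/class/hwmon/*/name and testing which pwm file moves the fan.
from pathlib import Path

HWMON = Path("/sys/class/hwmon/hwmon2")   # hypothetical; varies per board/boot

def set_pwm(percent: int, channel: int = 2) -> None:
    value = round(percent / 100 * 255)                 # "PWM FAN = 40" -> ~102
    (HWMON / f"pwm{channel}_enable").write_text("1")   # 1 = manual PWM control
    (HWMON / f"pwm{channel}").write_text(str(value))

if __name__ == "__main__":
    set_pwm(40)
```

Calling a script like this from the flash go file (or from a script set to run at array start) would at least pin the fans to 40% on every boot until the plugin takes over, assuming a Python interpreter is available on the box; the same two sysfs writes can be done from any startup script.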
  3. I've got everything set up as I want it with autofan and it works well for me on my Supermicro X9SCM-F. For what it matters, I also have the following plugins: IPMI (configured with fan control off) and System Temp (which sees my sensors and controllers). However, every time I reboot I have to manually change out and put back the 'PWM FAN' value in the plugin to force it to talk to the controller and manage the speed. My BIOS fan mode is set to 'FULL', as I've read the BIOS doesn't fight for PWM control when it's set like this. If I just reboot Unraid and log in, the array fans (the ones I manage with autofan) run at 100% until I do the disable/re-enable, detect process. Is it likely that the plugin needs to be forced to refresh its values after a fresh reboot on the X9 platform so that it takes PWM control from the BIOS? If so, could this be added?
     Secondly, am I right in thinking the plugin doesn't display the highest disk temp in the Unraid footer? My autofan plugin simply displays CPU (temp) / mainboard (temp) / array fan (%), but the footer doesn't tell me what the disk temp is! As this plugin adjusts fans based on disk temp, wouldn't it make sense to display the disk temp?
     I'd like to get to the bottom of what I need to do to avoid the re-enable/re-detect every reboot before the plugin takes control. I've tried uninstalling and reinstalling and manually clearing out all the old .cfg files for the plugin, but nothing has worked. Cheers for the hard work, this plugin was definitely needed. Hope someone can shed some light.
  4. Hi guys - quick one: can anyone confirm that they have this plugin changing fan speeds based on HDD temp on the Supermicro X9SCM-F? I know there have been a lot of questions around the Supermicro X9 boards and I can't find clarification in this 132-page thread! lol. If so, can someone detail which fan headers they're using for the CPU and disk-array fans, and what fan mode (if any) they're running in the BIOS? It would be greatly appreciated.
  5. @uldise - I hear what you're saying with the Noctuas, but they seem to be more in the PC/gaming bracket in terms of performance; I was comparing Nidec/Delta fans in the 80/120mm sizes that consume around 10 times as much power but have much larger static pressure figures. I'm wondering whether, aerodynamically, once you get into server-grade fans, 80mm starts becoming a better compromise of back pressure vs noise vs performance. @aburgesser - Can you give me an example of server-grade fans where the 120mm fan outperforms the 80mm in the same price range or product line?
  6. Hi guys, I'm replacing the midplane fans on my 24-bay "X-Case 424S", which is a 24-bay Norco-esque copy. I've noticed that 80mm fans are generally better at static pressure than 120mm fans, so I was thinking of putting in 80mm fans as the case runs hot. However, the very top and very bottom disk bays won't be directly in front of an 80mm fan as it's smaller. Newer 24-bay enclosures seem to come with 80mm fans as standard - do you think it'll be more performant? From what I can see, because 24 drives take up ALL the space at the front of any 24-bay 4U server, in theory it doesn't seem to be any less efficient than buying a new case and running the stock 80mm fans. Opinions?
  7. Noticed that 80mm fans always seem to have better static pressure than 120mm fans... Might explain why it looks like Supermicro, for example, have run their most recent 4U 24-bay chassis as a 3 x 80mm midplane + 2 x 80mm rear exhaust setup over a 120mm midplane... Thoughts?
  8. Hi guys, I run a Norco 4224-esque case, the X-Case 424S 24-bay server, with 24 x 2TB 7.2k disks. It's been running hot for a while and I want to fix it up. I've been looking for the best bang-for-buck high static pressure fans I can find. I was looking at the Alphacool ES 120mm series or the Supermicro AFB1212SHE, both 4-pin PWM. Does anyone have any good recommendations for high static pressure fans in 120mm and 80mm sizes? I looked at the Noctua industrials but they're pricier and not as effective due to lower back pressure. I'd prefer not to exceed 1.67 amps per fan. Any experiences?
  9. Bought an EVGA SuperNOVA 1300W G2 in the end. Yup, this is what I was already thinking. Annoyingly, for some brands these 6-pin PCIe > Molex/SATA cables aren't even compatible between their own PSU models! I suppose I was kind of asking whether anyone knew if EVGA had standardized their own 6-pin peripheral connector across 'most' of their PSU models. I've since checked and they're one of the better brands for it. I wanted to buy from EVGA but unfortunately they have none for sale on the site! AND they have limited them to like 2 per household and I need 4!!! lol. I could make them, but tbh it's easier and safer to buy prebuilt, because this whole thread started due to my instability issues in the first place. I want a good test bed with a PSU with prebuilt cables, so if it doesn't sort my issues I can safely say: "At least it's not the PSU!" However, I do still have the dilemma of wanting to buy official EVGA Molex cables and nowhere to really find them... other than a random guy on eBay who could simply be selling £4 Chinese ones off as EVGA-compatible... Sigh.
  10. @Vr2Io - Do you think you could use the SATA1-4 PCI-e 6 pin slots for 4 x 4 molex cables?
  11. All good points. Like you said though, the only downside is that one of my peripheral cables would be powering 2 rows of drives (8 disks) - that's 26 amps on a single cable...
  12. Yup, I agree. So out of the PSUs I listed above, what would you pick? I'm leaning towards the EVGA SuperNova 1200W or 1300W.
  13. Yup. Why not all in Molex? I need them all in Molex... I understand the PSU won't come with that many, but I'll run 4 more Molex cables on top of the 2 it comes with to the 1200W PSU... that's what the midplane takes. Are you just warning me that the PSUs don't come with 6 x Molex modular cables?
  14. Sounds like you might be fine then. Not an option for me though - I'm stuck with ATX form-factor PSUs.
  15. The devil's in the detail. I suppose in your case it would all depend on how you're distributing the power between the 2 PSUs. If one PSU is responsible for powering 4 disks on 1 wire, you're in the same position as me if your disks pull 3.3 amps at cold start. It would be good to see exactly how everything's wired to be sure. But either way it won't help my conundrum if you've got 2 PSUs. lol.
  16. That's what I was thinking above. Possibly. The problem is most PSUs under 850W only have 4 slots for plugging in peripheral cables... hence my move to a 1000-1200W PSU to get 6 peripheral cable slots. Much better that they pull 25W! But if you're spreading those Molex connectors too thinly you may end up in the same position as me as your PSU degrades - it really depends on how many of those banks of 4 disks each cable is powering, I suppose.
  17. @Vr2Io @Ford Prefect The only way any of this makes sense is if any single cable coming from the PSU with Molex or SATA on it can withstand 15 amps of current for a short period of time... Still sketchy at best though. This is why the only way I can think of doing this is to get a PSU of around 1200W that can take my ~79 amps of potential startup current from 24 disks plus the mobo/CPU/HBA loads, and distribute the power across 6 separate peripheral cables (with a few Molex on each cable) to keep the load down to around 13.2 amps per wire. Only the larger-wattage PSUs seem to have 6 x peripheral modular connector slots.
  18. @Vr2Io - Are your disks individually powered (1 disk = 1 Molex connector), or are you using a midplane? It's the midplane that is the issue. @Ford Prefect - OK. Mine has the same, but the 2nd row of Molex connectors on each row of 4 disks is mainly for redundant power supplies. I'm sure you can power them from the same PSU, but you would still run out of single wires coming from your PSU without daisy-chaining Molex connectors from a previous row to power the next row, which in turn would bring the amperage over the 15-amp rating for a single wire back to the PSU. Your way would only work for the 1st or 2nd row... after that it's over the amperage rating of the cable... So only the 2nd picture is the most efficient. You still need 6 separate cables back to the PSU to complete the midplane, and I'm assuming you're not using 6 to power it?
  19. >> Wouldn't this mean on your 16 drives, though, that you have 1 Molex connector per 4 drives? Depending on the cold-start draw of your disks, that would put you in the same camp as me: 16AWG cable = 15A, Molex connector = 11A, 4 x 3.3A disks on a Molex connector = 13.2A. Obviously if your disks weren't as power-hungry as mine you'd be OK?
     >> Rather than modding the cables, couldn't I just order up additional PCIe 6-pin > Molex cables from EVGA? They give you 1 or 2 with the PSU anyway and it has 6 x 6-pin peripheral ports on the PSU... wouldn't that work?
     >> Wouldn't this be the same as what I mentioned above - rather than DIY cables, just order spares of the ones you need from the company? (In my case I'd order 4 extra cables with Molex connectors on them.)
  20. Hi guys, a nice simple and easy one here. Assuming all your drives are spun down and the scheduler decides it's time for a parity check... does Unraid stagger the disks up in batches, or does it issue the spin-up command to all of them at once? The reason I ask is that I have 24 drives and suspect I have PSU issues; I've discussed it in more detail over on this thread. It would make sense if Unraid were to do a pre-check prior to starting a parity sync by spinning each drive up individually a second or so apart, waiting a minute or so for all drives to be up, and only then starting the parity check - to avoid the inrush of current from previously spun-down disks all coming up at once. I assumed this might already be the case, but all the threads I've found say Unraid doesn't have the ability to stagger spinning drives up, so all drives will come up at once at the beginning of a parity check. Can someone clear this up for me?
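If Unraid really does wake everything at once, one user-side workaround would be a script run shortly before the scheduled check that spins the drives up one at a time. A rough sketch, where the device list and the delay are assumptions to adjust for your own array:

```python
# Rough sketch of a staggered spin-up: read a little data from each array disk
# in turn, pausing between drives, so a parity check scheduled shortly afterwards
# starts with everything already spinning. DISKS is a hypothetical device list.
import time

DISKS = [f"/dev/sd{letter}" for letter in "bcdefg"]  # adjust to your array members

def wake_disk(dev: str, nbytes: int = 1 << 20) -> None:
    # A plain read forces a sleeping drive to spin up; seeking a little way in
    # makes it less likely the blocks are already sitting in the page cache.
    with open(dev, "rb") as f:
        f.seek(1 << 30)
        f.read(nbytes)

for dev in DISKS:
    wake_disk(dev)
    time.sleep(2)   # stagger the inrush current by a couple of seconds per drive
```

It would need to run as root a few minutes before the check is due; whether Unraid itself staggers spin-up is still the open question above.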
  21. @Vr2Io @Ford Prefect Yeah, I get that. Thanks for confirming. The issue I'm trying to wrap my head around is that my midplane uses 6 x Molex to power 24 disks, while most PSUs have at most 2 separate lines of 3 or 4 Molex connectors. In essence, in a case like mine, with drives like mine (3.3A cold start), isn't that potentially:
     12 disks x 3.3A = 39.6A (~480W) for the first Molex cable
     12 disks x 3.3A = 39.6A (~480W) for the 2nd Molex cable
     And if Molex connectors are rated at 54W each and there are 3 of them per wire, the max draw over all 3 combined shouldn't exceed 162W! I understand this is a burst of current and they can potentially take it in the short term... but still. It seems realistic that this sort of configuration for any of these 4U cases with 6 Molex and 24 drives is impossible, simply because it exceeds the maximum a 4-pin Molex can take (54W). In essence:
     * if 16AWG cable is only rated at 15A,
     * and a single Molex is rated at 11A,
     * but the drives cold-start at 13.2A per connector on the midplane,
     then it seems physically impossible for me to power my midplane! Unless the Molex can withstand the extra 2.2A of draw for a short time? I drew this crude pic to demonstrate what I mean. What I'm looking for is:
     * a power supply with over 85A on the 12V rail, for headroom for onboard and PCIe cards (1000W+)
     * a power supply that is single rail
     * a power supply that can run 6 separate cables with a Molex connector on each of them
     Although this doesn't explain how the Molex connector on each of the 6 cables can take ~13A (over 150W) of current for the time it takes to spin that bank of drives up! However, lots of people run them, and even the most energy-efficient drives probably take around 2A to spin up... The best fit I can find is something like the EVGA SuperNOVA 1300W G2 - what do you think? Obviously a new case, new PSU and new disks would be amazing, but it's just not an option for me right now. What PSU model are you using @Vr2Io?
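To put rough numbers on the cable maths in the last few posts, here is the same calculation written out, using only the figures quoted above (3.3A cold start per disk on 12V, 6 modular cables, 4 disks per Molex drop); the wire and connector ratings are the ones quoted in the thread, not verified datasheet values:

```python
# Back-of-envelope for the figures quoted above (not verified datasheet numbers):
# 24 disks at 3.3 A cold start on 12 V, fed by 6 modular cables, with one Molex
# drop per group of 4 disks.
DISKS = 24
SPINUP_AMPS = 3.3            # per-disk 12 V cold-start draw quoted in the thread
CABLES = 6
DISKS_PER_CONNECTOR = 4

total_amps = DISKS * SPINUP_AMPS                        # ~79.2 A total inrush
per_cable_amps = total_amps / CABLES                    # ~13.2 A per modular cable
per_connector_amps = DISKS_PER_CONNECTOR * SPINUP_AMPS  # ~13.2 A per Molex drop

print(f"total spin-up draw : {total_amps:.1f} A  (~{total_amps * 12:.0f} W at 12 V)")
print(f"per modular cable  : {per_cable_amps:.1f} A  vs ~15 A quoted for 16 AWG wire")
print(f"per Molex connector: {per_connector_amps:.1f} A  vs ~11 A quoted connector rating")
```

Which is why the per-connector number only works if the connectors tolerate a brief overshoot, or if the spin-up is staggered or the drives draw less at cold start.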
  22. Thanks. Yeah, no bulging capacitors, but I'll take a pic and upload it later. My case is the 550mm X-Case RM424S, so it doesn't have room for 2 x PSUs - I'll have to stick to a single large PSU. I've been thinking of upping the size to the EVGA Supernova 1000 or 1300 models... That gives me around 83A (1000W) -> 108A (1300W) on the 12V rail and around 25A on the 5V... Do you think this should hold up well for my system? I think going forward, rather than expand further, I'll simply start the process of migrating these power-hungry 2TB disks over to larger, newer disks with lower spin-up draw as a battle plan... I do, however, need to get the server back up prior to migrating the disks over to more energy-efficient models.
  23. Hmmm, I've run it for many years on the CX750M - I wonder how I've got away with it for so long during parity checks? I'd assume it has something to do with how the spin-up load is split between the 5V and 12V rails... but it doesn't make sense that just the disks alone + 3 x HBAs, 1 x 10Gb NIC, 2 x SSDs, mobo + CPU should be pulling 800W during a parity check... That's surely what it would take to push my CX750M past its peak...
  24. Hey guys, I've got a modest setup that I've been running for a few years, and it's always been plagued with the array dropping off after a few weeks. Nothing in the logs, so I suspect it's always been a PSU issue, as nothing else is coming up as a problem. I run the server with:
     22 x Hitachi 2TB 7200rpm drives (2 for parity), peaking at around 11W / 0.9A each according to this link (https://content.it4profit.com/itshop/itemcard_cs.jsp?ITEM=91012054645819599&THEME=asbis&LANG=en)
     3 x Dell H200 HBAs (14W max each)
     1 x 10Gb PCIe NIC (14W max)
     2 x Samsung EVO 500GB SSDs (<5W each)
     MOBO: Supermicro X9SCM-F (<20W)
     CPU: E3-1240v2 (69W TDP)
     24-bay server midplane (<20W)
     PSU: Corsair CX750M (single rail)
     That works out at roughly 417W max draw - about 35 amps if it were all on 12V, I believe (about 439W if I used all 24 bays rather than 22).
     So as mentioned, I'm in the market for a replacement PSU. Based on the hardware above, what would you recommend? If anyone has a similar setup, could you let me know which PSUs you've had good luck with as a starting point? Any advice would be great. I'm not looking for the latest and greatest, just something solid and reliable that will let me populate my other 2 drive bays at a later date without worry (for what it matters, my midplane uses 6 x Molex connectors to power the drives). Do I even need to replace my CX750M? Is there an easy way to tell? I've recently removed it and it was pretty dusty, so I'm just in the process of cleaning it out... Any chance that using Unraid as a persistent syslog server would help to capture anything PSU-related? What should I be looking for in the logs to indicate it? (https://wiki.unraid.net/Manual/Troubleshooting#Persistent_Logs_.28Syslog_server.29)
     I'm thinking of the following PSU models - anyone got any experience with them in a similar rig? They have iCUE link for power monitoring and are single-rail switchable:
     RM750i, RM850i, HX750i, HX850i
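For what it's worth, a quick sum of the component figures listed in that post (quoted values only, treated as sustained draw; cold-start inrush is the separate issue discussed in the other thread):

```python
# Quick sum of the component figures listed in the post above (quoted values,
# treated as worst-case sustained draw; spin-up inrush is a separate question).
components = {
    "22 x Hitachi 2TB 7200rpm @ ~11 W": 22 * 11,
    "3 x Dell H200 HBA @ 14 W":         3 * 14,
    "10 GbE NIC":                       14,
    "2 x SSD @ ~5 W":                   2 * 5,
    "Supermicro X9SCM-F board":         20,
    "E3-1240v2 (TDP)":                  69,
    "24-bay midplane":                  20,
}

total_w = sum(components.values())
print(f"total ~{total_w} W")                                  # ~417 W with 22 disks
print(f"~{total_w / 12:.0f} A if it were all on the 12 V rail")
print(f"~{total_w + 2 * 11} W with all 24 bays populated")    # ~439 W
```

On paper that sits comfortably inside a 750W unit, which is why the cold-start inrush question from the other thread is the more interesting number.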