thestraycat

Everything posted by thestraycat

  1. @Squid - Which CA am I after? Had a quick look but couldn't find them... @Jonnie.Black - If I were to invest in a semi-managed switch that supports LAG/LACP, would that be my only option for Windows clients?
  2. I have a few quick newbie questions if anyone has a sec? 1. I have 6 NICs and want to increase throughput. I don't yet have a managed switch. What are my options? 2. Is there a best practice for backing up my unRAID USB pen drive? I'd really like to know recovery from a pen drive failure is covered! 3. How are people backing up their docker app configs? Are people just cron/rsyncing their /mnt/cache/appdata folders to another disk? Any info would be awesome; I've trawled but couldn't find any solid advice on the above.
  3. +1 for iSCSI for mounting Hyperspin to Windows clients... It's the only way to serve content remotely to Hyperspin...
  4. Hi Activ - Any idea why LazyLibrarian's auto-updater doesn't update the application? I've tried forcing an update through unRAID but get a popup complaining about being 174 commits behind latest... when I hit update within LazyLibrarian it says "updating..." but doesn't seem to be doing anything.
  5. Hi guys, I'm running the latest container of linuxserver.io's SABnzbd... I just wanted to ask whether it's normal that the "Enable Multicore Par2" option is greyed out? I read earlier in the thread that multicore par2 was included in this container. I've entered "-t+" under "Extra PAR2 Parameters", but was concerned that if the Enable Multicore Par2 option is greyed out it wouldn't be used. I've checked in unRAID's dashboard whether all cores are in use during a verify and unpack, and they seem to be (although they're not maxed). Is there an easy way to confirm?
  6. Will I need to enable "User Shares" to create a share called appdata on the cache? I'm not seeing any option to create shares at all on the cache...
  7. Hi guys, so the method for making cache-only folders seems to have evolved through unRAID 4, 5 and 6... Being a new user on the 6.2 beta and not being able to distinguish the correct method from the existing posts, I thought I'd ask. Sorry if it does exist for the 6.2 beta... I was looking for a cache-only option but couldn't find one for setting appdata to be persistent on the cache and untouched by mover. Do I set "Use cache" and then add each disk to the exclude list? Do I use Midnight Commander to create the folder, or should I let docker create the folder to avoid future permissions issues etc.? I wasn't sure whether prefixing folders with '.' was still a valid method? If someone can confirm, that would be awesome! Just want to follow best practice in the early stages!
  8. I'll have a poke around... Thanks for the help!
  9. The powerdown script seems to have sorted my array-stopping issue... any idea why?
  10. Hi everyone! New user here, and I've hit a first hurdle. No user shares set up yet; I stop the array and I get "user share - Retry unmounting user share(s)" in an endless cycle. Pretty rubbish. I can terminal to the unRAID server and reboot it and it comes up fine, but it doesn't allow me to stop the array without going into a loop. I'm running Windows 10, and neither Chrome nor IE makes a difference. The only thing I've attempted is to set up docker. I just went Pro, and so far this issue has completely stumped me. Any ideas??
  11. Hmmm... It seems to me that the peak amperages would surely have to be considered?
  12. The RM750x has 7 molex connectors, not 2.
      >> Connectors, yes - but all the connectors come off 2 cables from the modular PSU. It's the maximum amperage of the cables I'm more concerned about, tbh. Hence why I proposed it may be a better idea to run 4 cables from the PSU (2 modular cables with molex ends, and 2 modular cables with SATA ends). That way I can break the amperage up over 4 runs from the PSU.
      If you need more, you also have 8 sata connectors, then you will need sata to molex adapters, NOT splitters. Example: http://www.newegg.com/Product/Product.aspx?Item=N82E16812706032&cm_re=sata_to_molex-_-12-706-032-_-Product
      >> Agreed. Adapters are the way forward. Assuming I only need to use half of the molex connectors on the backplane (if the other half are indeed for redundant PSUs), it's going to be a bit annoying to split the load. It means that every molex on the backplane powers 4 drives. If I only have 4 modular cables, then 2 of them will be responsible for powering 8 drives each... Is 8 drives pushing it over one modular cable from the PSU? Quick maths would dictate that at a cold startup that's 16 amps... I don't think any modular cables are rated for that sort of amperage, are they? Internet resources suggest that most PSU modular cables are 18 AWG, which equates to a sustained 9 amps on the cable?!
      Then you would have 15 molex connectors, if you actually need 16 molex connectors (and they are not just for a redundant power supply, which you don't need since you only have 1 PSU), you can step up to the RM850x, which has 8 molex connectors and 10 sata connectors. With 10 sata-molex adapters, you would get a total of 18 molex connectors.
      >> Yup, trying to get clarification on the XC-RM424S to see whether it needs all 12 powered or not.
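The back-of-envelope maths in the posts above can be sketched as below. The ~2 A of 12 V inrush per 3.5" drive at spin-up is an assumption (real drives vary; check the label), compared against the quoted ~9 A sustained figure for 18 AWG wire.

```shell
#!/bin/bash
# Worst-case 12 V amps through one cable run at cold start, assuming
# every drive on the run spins up at once (i.e. no staggered spin-up).
# inrush_a per drive is an estimate, not a manufacturer spec.

amps_per_run() {
    local drives=$1 inrush_a=$2
    echo $(( drives * inrush_a ))
}
```

With these assumptions, `amps_per_run 8 2` gives 16 A for 8 drives on one run (well past a 9 A 18 AWG rating), and even splitting 24 drives over 4 runs of 6 gives `amps_per_run 6 2` = 12 A, which is why staggered spin-up makes such a difference to the cabling question.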
  13. Thanks again Squid. I agree with everything you said. However, if my backplane has 12 molex connectors feeding it, would I be better utilising both the molex and SATA cables from the PSU, thus splitting the load over 4 cables? Better than just using the SATAs, surely... Additionally: which is the more robust connector for amperage, molex or SATA (to anyone who may know!)?
  14. I'm a little confused. My chassis's backplane terminates in 12 molex connectors. The PSU is single rail and modular; it has 2 cables with molex connectors and 2 cables with SATA connectors. I was planning on running a 6-way molex splitter off each molex cable... is there a benefit to running 2 x 6-way SATA splitters off the 2 SATA cables instead?
  15. Thanks Squid - any reason to bite off the SATA connectors and not the molex? Surely if it's a single-rail PSU it doesn't really matter, does it? The only reason I ask is that I've just ordered 2 of the splitters listed in the link a few posts up... under the impression it all comes off the same rail?
  16. Which leads me to my next question... PSUs like the Corsair RM750X that have 62.5 A on the 12 V line only ever terminate in 2 molex connectors... That sounds worrying... If you look at the spec of 18 AWG wire (which most splitters seem to be), the sustained maximum amperage is around 16 amps! So surely powering, say, my 12 molex connectors on the backplane from 2 molex connectors, pulling potentially 48 amps (briefly), is just asking for trouble?? Can anyone recommend any good splitters for getting around this issue? I know that the length of the splitter comes into play, and that the draw is a brief surge, but the numbers don't seem to add up... 12 V single rail (62 amps!) ---> 2 molex connectors (carrying all the power across them!) = FIRE?? I was looking at these splitters, which benefit from being short but are still only 18 AWG... Thoughts from people running 24-bay cases? Who's the go-to? http://www.moddiy.com/products/Top-Quality-18AWG-Molex-to-3-x-Molex-Cable-Splitter.html
  17. So 750 W would cover it OK? Any good recommendations for a strong 12 V rail?
  18. Hi there! Yeah, I was eyeing up a single-rail Corsair 650, as people running 24 drives have had good results... my fear is that a lot of the disks are 7200 rpm and I run 3 HBAs...
  19. Hi guys, quite new around here but have been popping in and out for the last 6 months... I've trawled through the existing posts on hardware advice and builds but couldn't find a solid recommendation for my disk/peripheral setup, and was wondering whether anyone could help? This is the kit I currently have ready to be re-purposed:
      1 x Case 424 chassis (stock fans at the mo!)
      1 x Supermicro X9SCM-F mobo
      1 x Xeon E3-1245 (95 W, with iGPU)
      3 x Dell H200 6 Gb/s (unflashed at the moment!)
      4 x Molex 4-pin to 3-way splitter
      24 x 2 TB disks (mix of 7200 rpm and 5900 rpm, mix of brands)
      1 x Samsung EVO 500 GB
      Still needed:
      6 x 8087 -> 8087 cables
      1 x PSU
      My questions are:
      1. Anyone have a good link to 6 cheap 0.5 m 8087-to-8087 cables in the UK?
      2. What model/size power supply should I go with to accommodate my kit list? I was drawn to the Corsair TX650 but am a little concerned it may come up underpowered. With a mix of disks, 3 HBAs and a 95 W CPU, I'd like to future-proof so that when I dump the 2 TB disks it can cover any future disk upgrades. All the examples I could find were for 20+ green disks, which seemed a little light on power usage in comparison. If anyone can list what they've had good results with, that would be super handy!
      3. If I were to flash the H200 HBAs with firmware allowing staggered spin-up, could I hedge my bets with the 650 W supply, do you think? Or will the parity check get me?!
      4. Any suggestions on upgrading the 12 cm and 8 cm fans in these rackmount chassis? I was hoping to move them all to PWM, as I believe they're currently 4-pin molex.
      Any advice/pitfalls would be great at this stage in the build!
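A rough peak 12 V budget for a build like the one above can be sketched as below. The 95 W CPU figure comes from the kit list; the ~2 A spin-up per drive and ~10 W per HBA are assumptions, not measured numbers.

```shell
#!/bin/bash
# Rough worst-case wattage at cold start: every drive spinning up at
# once (no staggered spin-up) plus CPU and HBAs. Per-drive inrush and
# per-HBA wattage are estimates - substitute real figures if known.

peak_watts() {
    local drives=$1 inrush_a=$2 cpu_w=$3 hba_count=$4 hba_w=$5
    echo $(( drives * inrush_a * 12 + cpu_w + hba_count * hba_w ))
}
```

Under these assumptions, `peak_watts 24 2 95 3 10` gives 701 W at simultaneous cold start, which is why a 650 W unit only looks comfortable once staggered spin-up is in play.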