CyberSkulls

Everything posted by CyberSkulls

  1. Windows 10 does this to me sometimes as well on a fresh install. I don't know why, but this has always worked for me: rather than using the IP in Explorer, I use the server name. So let's assume it's still "tower". I go to \\tower and from then on it will show up under My Network. Not sure why this happens, though. Sent from my iPhone using Tapatalk
  2. As others have pointed out, the Arctic fans have a decent CFM in an open environment such as a large tower case with no obstructions. But as you have seen, they have practically no static pressure, meaning they just can't pull enough air through the drives/backplane. I use the stock fans running off my motherboard's fan controller (ASUS M5A97 R2), and my drives idle in the upper 20s and under load hit 30-31C. I also left the stock PDU in and upgraded to SQ power supplies. The system stays cool, and although not silent, it's quiet enough for me.
  3. You're excluded, you get nothing! I do, however, have some spare parity checks I would part with. Free shipping, too.
  4. Not that you want to go the CaseLabs drive cage route, but if you're curious about those, I would recommend them as well. I'm putting one together right now using the Fractal Define S and (3) of the 4-drive cages in the front to start. So it will hold 12 drives to begin with and can hold another (3) sets of cages, so another 12 drives down the road, mounting them to the top where a typical radiator would go. So in reality, you could stuff 190-240TB into a mid tower chassis. Just some food for thought.
  5. I never had a problem with drives not spinning down with a mapped drive on Windows 10. It always worked as intended.
  6. I also wouldn't mind something like this, but it could turn into a licensing nightmare. As an example, and don't hate on me for this, I don't currently use or care to use Docker or virtualization in any way, shape or form, so I would have seen absolutely zero value in paying for an upgrade. Now with that being said, I do see value in dual parity and, hopefully down the road, triple parity, larger disk arrays or multi-array licenses. Or to be blunt, storage-focused features. So I would definitely pay for an upgrade for that. I'm all about storage when it comes to unRAID, and it is the only thing I use it for. I truly wish unRAID were broken up into multiple product lines, as I would strictly want the storage portion of the OS. So how would one upgrade without paying for a bunch of features they don't want and would not use? That's a tough question and a tough sell. If LT ever went that direction, it would be interesting to see how it would take shape.
  7. I have some of the SAS1 backplanes and can put 23 disks larger than 2TB in one and it runs just fine. Now, can I guarantee it will recognize every drive from every manufacturer >2TB? Nope, I can't. But I can with 100% certainty tell you that most (NOT all) who say they won't recognize drives >2TB either currently don't or never have owned that chassis with a SAS1 backplane. They simply regurgitate false information they read somewhere else. It's not officially supported by Supermicro, but not a single one of my 846 chassis with the SAS1 backplanes has an issue seeing drives >2TB, as long as the server isn't fully populated, meaning 23 or fewer drives out of the 24 slots. I'm not saying I recommend doing this, just saying it is entirely possible. Now as to the speeds of SAS1 vs SAS2/3: that all depends on your use case. If you're going for some massive virtualization lab running 24 SSDs, you might have a speed issue. If you're running something like a media server, you won't even come close to saturating a SAS1 link anyway, especially if you're only running a gigabit network. Hope that helps. **Edit: Echoing what johnnie.black said, you will see a bottleneck during parity builds/checks if you're filling the server completely up with drives on the SAS1 backplane.
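The SAS1 bottleneck described above can be sanity-checked with some back-of-the-envelope math. This is a sketch, not a benchmark: the 3 Gb/s link rate and 8b/10b encoding are standard SAS1 figures, the 4-lane wide link and 24-slot count assume a fully loaded 846-style chassis.

```python
# Rough SAS1 bandwidth math (assumed round numbers, not measurements).
# SAS1 runs at 3 Gb/s per lane; 8b/10b encoding leaves ~300 MB/s usable.
LANE_MBPS = 300          # usable MB/s per SAS1 lane
LANES = 4                # lanes in one wide link to the backplane
DRIVES = 24              # slots in the chassis

link_mbps = LANE_MBPS * LANES        # total bandwidth shared by all drives
per_drive = link_mbps / DRIVES       # during a parity check every drive
                                     # reads simultaneously
gigabit_lan = 125                    # theoretical GbE ceiling in MB/s

print(f"Shared link: {link_mbps} MB/s")
print(f"Per drive during parity check: {per_drive:.0f} MB/s")
print(f"Gigabit LAN ceiling: {gigabit_lan} MB/s")
```

At roughly 50 MB/s per drive with all slots busy, a parity check is link-limited (modern spinners can sustain well over 100 MB/s sequentially), which matches the edit above; a single media stream over gigabit, on the other hand, never comes close to the 1200 MB/s link.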
  8. Docker is not enabled with a fresh install. You have to turn it on. Quit trolling. He might be referring to the isos share that does get created automagically on a fresh install. I don't use Docker or VMs, so I just delete the share. But he has a point: it should be up to the user to create it. It puts mine on the parity-protected array. Not a huge deal, but an annoyance for anyone who doesn't intend to use it. So I can see where he's coming from.
  9. The chassis did not like that idea and refused to cooperate with me. Apparently the SAS3 controllers HGST used in these chassis only allow control from one head unit, so running two unRAID servers connected to the same JBOD didn't work. I was hoping that would be a quick and dirty solution. I don't really want to run a plugin such as Unassigned Devices and have a crapload of shares, so that's not an option. The only other option, based on other posters in this thread (and I appreciated the feedback and help), would be to create a large cache-only share/pool and not be able to use XFS, so that's not something I want to do at this point in time. So unfortunately I'll now have to move these chassis back to my old setup of running Windows Server with StableBit DrivePool and Scanner, put my unRAID licenses in a drawer, and hope for something to change in the future. Not the outcome I was hoping for. I'll gladly move them back to unRAID if LT raises the artificial drive limitations, allows multiple Pro licenses to run multiple arrays on the same machine/hardware, or creates a higher tier that allows for more drives. I would think any one of those three would be workable options.
  10. And that's kind of what I saw in a YouTube video: running the VM arrays was fine, but stopping one caused some issues, even when it came to two or more licensed GUIDs showing in the system. I'll have to dig out a bunch of my craptastic Seagate 3TB drives, yes, the dreaded ones that we all know and love to hate, and try to run two separate machines pulling off the same JBOD chassis and see what I get for results.
  11. IIRC, the Pro license supports 30 drives, 2 parity, and up to 36 cache drives in a pool, with no limit on the # of unassigned devices. And I might have to look into Unassigned Devices again; maybe I misunderstood how it works. I honestly don't care if they aren't part of the parity-protected array, since I also have full backups, if you can set all those drives to show up as a single share. As an example, I wouldn't want to have 30 different shares with movies in them. I would need them pooled together to show as a single share. Same with the cache devices: I've never used the cache for anything other than a cache drive, so I'm unsure how it might or might not work for my situation. One thing I've never understood is, if we're allowed to have 24 cache devices to begin with, why not just remove the restriction on the array limit, or increase it and allow 54 total rather than 2+28+24? Or, like I've asked Tom, create another tier above Pro, call it Premier or Enterprise or whatever name sounds catchy. I don't mean that as a smart ass if it comes across that way; I am genuinely curious.
  12. So although this was a side tangent, and I apologize for that, I hope you can see why running more than the allowed 28 data drives in an array would be vital to me. Or multiple licenses off the same head unit. I might have to try running two main unRAID servers and have them both connected to one chassis, splitting it right down the center, as in the left 30 drives to Tower1 and the right 30 drives to Tower2. Just not sure if they would conflict with each other pulling off the same JBOD chassis. Worth a try, I guess.
  13. Haha. These were brand new chassis bought by iXsystems (the FreeNAS guys) to harvest the 8TB Hitachi drives out of. The only way to buy those chassis is fully loaded with drives from Hitachi, something like $25,000-$50,000/EA. And from what I understand, Hitachi had a big-ass sale on these things, so iXsystems bought a bunch, harvested the 8TB drives and dumped the chassis on eBay. So I basically bought an entire brand new 60-bay chassis for about what it was going to cost me to upgrade one of my SM846 backplanes with a used SAS2 backplane from eBay in order to even run an 8TB Red. So for me it was something fun to play with that also serves a purpose.
  14. I should also add that my main driver behind wanting this feature, or multiple licenses on the same machine for multiple arrays, is that I couldn't help myself with the dirt cheap 60-bay SAS3 JBOD HGST chassis on eBay recently. They spoke to me and I had to have them. I would love to be able to run unRAID on them rather than switching everything back over to Windows and DrivePool.
  15. I actually agree with most of what you said there, to the point of LT feeling that if you're setting up a second array, you should have a second license. I would even agree with that. So on that note, being able to add a second license to that install, thus allowing a second array to be run, would be a perfectly workable solution for me and, to be honest, a completely fair one. I own multiple Pro licenses, so I'd be all over that if a feature allowing multiple licenses were to be implemented. It would completely solve my current dilemma. As to drive capacities getting larger: for me this will be offset by the media I store also getting larger. When I started ripping my collection it was all DVD, then rips got larger, and now I only rip Blu-rays. UHD encryption will eventually be broken, and those of us who rip our own media will start ripping those. At 100GB a rip (from what I've read), it will end up being the same proportion I have now, as far as putting a 25GB rip onto a 2TB drive vs a 100GB rip onto an 8TB drive. So this issue could self-correct for the near term, but those of us with large disk arrays will be in the same boat in a few years' time yet again, even with all 8TB or 10TB drives. I currently run (96) 2TB drives in my arrays in (4) SM846 chassis. Half are WD Reds and the other half (backup drives) are WD RE4s that got demoted to backup duty when I started buying the WD Reds. Anyone reading this might question why I'm running 2TB drives. Keep in mind that when I started building media servers, 2TB drives were these insanely huge drives, and 4TB, 6TB, 8TB... drives were not even on the market. When it came time to expand, since I already had the RE4s, naturally I started replacing them with much newer, lower-power and cooler-running Reds. So if I had to do it all over again and got to start with all that money in my pocket, would I go with 2TB Reds? Of course not.
But to start over now and sell off all my drives at used eBay prices, only to turn around and buy the same capacity worth of 8TB drives, would cost me an extra $3,000 on top of what I would get for my used drives to yield the same capacity. As drives start to die, I will replace them with 8TB or 10TB drives.
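The proportion claim in the post above checks out arithmetically: a 25GB Blu-ray rip on a 2TB drive and a 100GB UHD rip on an 8TB drive both work out to the same number of rips per drive, since both the rip size and the drive size scale 4x.

```python
# Rips-per-drive stays constant when rip size and drive size both
# scale 4x (sizes in GB, capacities rounded to decimal GB).
bluray_rip, uhd_rip = 25, 100
small_drive, big_drive = 2000, 8000

assert small_drive // bluray_rip == big_drive // uhd_rip == 80
print(small_drive // bluray_rip)  # 80 rips per drive either way
```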
  16. Multiple unRAID servers is what those of us who would like this feature most likely currently run. At least for me, it would be nice to have them contained within one unit or one main server. As an example, some of us have JBOD chassis that we would prefer to just link together through our SAS expanders rather than have multiple instances of unRAID running. I actually wanted to virtualize unRAID with multiple VMs, but I've read and watched videos where unRAID isn't happy when you go to shut one down, due to it seeing multiple licenses. As for your suggestion of just running multiple unRAID servers instead of fewer, larger servers: you would be correct. School systems could buy a bunch of minivans that seat 7 people to get all the kids to school, as it would in fact get the job done, as you suggest. But they instead choose to buy a bus that seats 50.
  17. Although I would venture to guess that the bulk of unRAID users run small to very small drive arrays, this would be beneficial to those of us with larger ones. With the few of us being the minority, I don't see this as a likely feature in the near term. Notice I said drive array, referring to the physical number of disks rather than the total size/TB in the array. Now with that being said, I would love to see multiple arrays, each with its own dual parity. It would bring unRAID up to date with the other NAS OSes and be basically multiple RAID6 arrays.
  18. I would say those are too damn hot. Everyone will have a different opinion, but as an example, the one that is at 59C translates to almost 140F. My drives idle in the 26-28C range and under load might creep into the 30-31C range. I would definitely look at your fan situation.
  19. I have to say upfront that I do like the Black Dynamix GUI, but I wish it were updated for the modern monitors we all use today on our regular desktops while viewing the GUI. It seems like a lot of wasted space. It reminds me of watching an old full-frame TV episode on a modern 1080p flat panel with the hideous black bars! Now with that out of the way, this style looks great. I'm gonna have to try it out on one of my servers. Now we just need a black version...
  20. Check the prices of used data center pulls on eBay. You'll be shocked how cheaply you can get a 24-bay with a SAS2 expander, complete with caddies, drive screws, air shroud and most likely the SAS cables still in the chassis. Then again, this is just one person's opinion from someone who got burned by Norco and then realized they don't support their products. Just read the reviews on Newegg by all the buyers who found out Norco won't respond. Norco's idea of support is that if you ignore the problem long enough, it will go away. Not exactly the best business practice. But by then they have your money.
  21. And therein lies the problem. Their quality is shady at best, and solid working units are very hit or miss. Some work fine and some are garbage. I didn't want to spend the money on a Supermicro, so I went with Norco. Any guesses on what happened after I went with the Norco chassis? I ended up buying the Supermicros anyway. Nobody can say what is right for you. If you want to roll the dice and gamble, I can't say I blame you, but add up the extra costs associated with a Norco, as far as buying a power supply and the additional HBAs needed versus an SM chassis with a built-in expander, and you can probably save a little money by buying an SM and have a much, much higher quality chassis.
  22. Their backplanes are still garbage. I bought a handful of their 2212 2U chassis about six months ago for a different project and had multiple backplanes fail. They only replaced one and won't respond to phone or email. So trust me, they are still garbage. Don't walk, run far, far away from their junk.
  23. We can all hope that someday we have the option of running multiple arrays, similar to FreeNAS. As an example, RAIDZ2.
  24. One thing to keep in mind is that you could also copy files to your new unRAID box without parity initially at full speed, then once complete, add in your parity drives and build parity. It will essentially double your write speed, or cut your total time in half. I had to move roughly 40TB when I built my unRAID servers and chose to go this route.
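To put rough numbers on the copy-first, parity-later approach above: here is a sketch for the 40TB move, where the write speeds (60 MB/s through live parity, 120 MB/s without) are assumed round numbers for illustration, not benchmarks.

```python
# Hypothetical comparison of the two migration strategies. The write
# speeds below are assumed figures chosen to reflect a 2x difference,
# not measurements from real hardware.
DATA_TB = 40
TB_TO_MB = 1_000_000
SECONDS_PER_DAY = 86_400

def days_to_copy(speed_mb_s):
    return DATA_TB * TB_TO_MB / speed_mb_s / SECONDS_PER_DAY

with_parity = days_to_copy(60)       # every write gated by parity updates
without_parity = days_to_copy(120)   # copy first, build parity afterwards

print(f"Write with parity enabled: {with_parity:.1f} days")
print(f"Write first, parity later: {without_parity:.1f} days")
```

The copy itself takes half as long; you still pay for one parity build at the end, but that runs as a full-speed sequential pass rather than gating every individual write.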
  25. I've always been fascinated by these enclosures for some obscure reason. So we just need a massive increase in the array device limit and we would be all set.