CyberSkulls

Members • 195 posts

Everything posted by CyberSkulls

  1. I really need to stop looking at hard drive temps in my chassis. My helium-filled drives run 4-5C warmer than my 2TB and 4TB Reds, but I've never seen them get warmer than 32C even after streaming for hours. They were under 30C before I swapped the fans in my 2U chassis for Arctic 80mm fans. Those barely blow enough air to get out of their own way compared to the stock San Ace fans, but my Fractal R4 is now louder than my server, so I can live with that.
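A quick way to watch temps without sitting in the GUI is to poll SMART attribute 194 with smartctl. A minimal sketch, assuming smartmontools is installed; the /dev/sd? glob is illustrative and may need adjusting for your controller:

```python
#!/usr/bin/env python3
"""Print the current temperature of each drive via smartctl.

Sketch only: assumes smartmontools is installed and that drives
show up as /dev/sda, /dev/sdb, ... (adjust the glob if not).
"""
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Attribute 194 is Temperature_Celsius on most WD/HGST drives
        if "Temperature_Celsius" in line:
            print(f"{dev}: {line.split()[9]}C")  # RAW_VALUE is the 10th column
```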
  2. I also just sent back a bunch of 8TB drives for the same vibration issues. And like you, I just liberated a WD80EZZX. More vibration than my 2TB/4TB drives but way less than the Reds I sent back. I wasn't careful when I shucked it, so it's a keeper now even with the slight vibration. To me, it's as if WD is shipping horribly unbalanced 8TB drives. I don't care how many platters they have, drives with the vibration you and I both encountered should never have left the factory.
  3. Just to make some think WTF to themselves: I run multiple 28-drive unRAID Pro machines and no parity disks in any of them. I have a couple of spare disks per chassis I could throw in for parity just because, but I didn't buy unRAID for the parity. I was a Windows/Drivepool guy coming from WHS2011. Since I didn't have parity on that, I grew accustomed to keeping a full identical backup in the event of a drive failure. So I chose unRAID as a Windows guy wanting a point-and-click interface, the option to add random-sized drives (all mine are the same size), and something generally very easy to expand. It made everything very simple. I also don't use Docker or VMs, as mine are pure storage servers, and I've never had a reliability issue with them; they just work. Looks like you're going to run parity, which is perfectly fine, but there are more than a few of us who run no parity at all and are perfectly content. As others have said, it's a great run-it-how-ya-want type of system.
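For anyone wondering what "full identical backup instead of parity" looks like in practice, it's just a periodic mirror to a second server. A minimal sketch, assuming rsync over SSH; the share path and backup host are placeholders:

```python
#!/usr/bin/env python3
"""Mirror a share to an identical backup server with rsync.

Sketch of the no-parity, full-backup approach: SRC and DST are
placeholders, and --delete makes the mirror exact (use with care).
"""
import subprocess

SRC = "/mnt/user/media/"               # share on the primary server
DST = "backup-host:/mnt/user/media/"   # same share on the backup box

subprocess.run(
    ["rsync", "-a", "--delete", "--partial", SRC, DST],
    check=True,
)
```

A failed data drive then costs one network copy of that drive's files rather than a full parity rebuild.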
  4. I have to chime in on this too. I constantly read about "insert any OS here" and how a drive rebuild puts so much stress on everything. I'm not trying to be rude, but you're talking about your drives reading and writing. This is what they are designed to do. If they can't handle it, they should be replaced with drives that can handle doing the job they were designed for. Think of it this way: you buy a new car but only drive it to the grocery store, never on vacation, because anything longer than a trip to the store is just too much stress on the drivetrain. It's a silly argument. I'm off my soapbox now.
  5. This always annoyed the crap out of me too, so when I do a clean install I make sure I go in and turn off VMs and Docker before adding a single disk so it doesn't create those shares. So maybe a solution is to have VMs and Docker set to off by default and let end users enable them if they so choose, rather than assuming everyone wants those shares by having them on by default.
  6. Most people never think about the fact that the larger the drive, the more platters and heads it has. They think it's some type of black magic that gives them more magical storage in the same physical space. And that's a lot of future downloading you have to do! I held out from downloading as long as I could and kept ripping from physical discs, which I will continue to do for older content.
  7. They aren't bad at all. I had the big chassis because the 112TB capacity I showed is what is currently full, not my raw capacity, and doesn't include parity or cache. Unfortunately I only had them half full, as I was hitting the 28-drive array limit and didn't want cache-only shares or a crap ton of unassigned devices. I kept everything spun down when not in use, so the power difference between more smaller drives and fewer larger drives was negligible. And with chassis this big, I wasn't running out of drive bays anytime soon! I looked many times at switching to larger-capacity drives, as everyone gave me a hard time about my spindle count.

     With that being said, by going from 2TB drives to 8TB drives I'm only reducing my platter/head count by 12.5%. My 2TB Reds have 2 platters and 4 heads; the 8TB Reds have 7 platters and 14 heads. I'm reducing my physical spindle count by 75%, but I personally haven't ever had a drive fail from a bad spindle. It's always been platter/head issues, aka sector problems. So although I'm reducing physical spindle count, I'm putting four times the eggs in one basket, and when I have a sector issue, and we all will, I'll have 7 platters and 14 heads die at once vs 2 platters and 4 heads. It's a personal preference thing; one way isn't any better or safer than the other. I just want to get rid of my big chassis and my entire server rack, get down to a couple of tiny tower chassis, and call it a day.

     That's exactly right. I'm still in the process of converting a lot of my DVD copies over to Blu-ray that were not available at the time of ripping. So that in itself will probably add another 10TB of data to my hoard.
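The spindle and platter percentages above check out. A quick worked version of the math, using the per-drive platter counts quoted in the post (one full 28-slot array of 2TB Reds swapped for its 8TB equivalent):

```python
# Worked math for the 2TB -> 8TB swap described above.
# Platter counts per drive are the ones quoted in the post.
drives_2tb = 28                       # a full 28-slot array of 2TB Reds
drives_8tb = drives_2tb * 2 // 8      # same raw capacity in 8TB Reds -> 7

platters_before = drives_2tb * 2      # 2 platters per 2TB Red -> 56
platters_after = drives_8tb * 7       # 7 platters per 8TB Red -> 49

print(f"spindle reduction: {1 - drives_8tb / drives_2tb:.0%}")            # 75%
print(f"platter reduction: {1 - platters_after / platters_before:.1%}")   # 12.5%
```

Same raw capacity, a quarter of the spindles, but each surviving drive carries four times the data, which is exactly the eggs-in-one-basket trade-off described above.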
  8. Gave that a try. Also rebooted the unRAID server and my Windows machine just to make sure. It does start out at 112 MB/s but quickly drops down into the 70 MB/s range and still continues to dip into the 40-50 MB/s range. The transfer speed graph on my Windows machine looks like a lie detector test from a compulsive liar! lol These disks are just acting weird. I had also never seen a controller like mine, the LSI SAS9201-16, hate a certain model of disk so badly. I've seen a bad backplane rack up UDMA errors, but man, some of my other 8TB Reds racked up like 400 errors in minutes. Like this issue, the writes were fine. All the errors came on reads, and only reads. Again, those controllers play perfectly well with all my 2TB drives regardless of make/model.
  9. Was trying to upgrade some of my WD 2TB Reds to 8TB Reds and have noticed very slow read speeds. Write speeds saturate my gigabit connection like I would expect; read speeds typically hover around 55 MB/s. I have tried multiple Cat6 cables, different ports on my switch, and internal motherboard SATA rather than external SAS, and the problem persists. The weird part is that if I transfer from a 2TB Red, the read speeds go back to saturating the link. So it appears to be specific to these 8TB disks and unRAID. I know a lot of peeps here use them, so I'm assuming I have missed a simple setting somewhere. I forgot to grab the diagnostics earlier when I had multiple disks in the system, both 2TB and 8TB Reds, so I put one back in, rebooted, ran a transfer, then grabbed the diagnostics. This transfer was better but typically hovered very low. Pay no attention to the UDMA CRC errors on this drive. That was from a different test, and apparently my SAS cards (SAS9201-16e) do not like these drives at all. Within minutes some of my drives had in excess of 400 UDMA errors. So the diagnostics attached are with an 8TB Red running off my onboard SATA ports. edit: I should also mention I put this exact disk into my Windows machine and it transfers internally at 180 MB/s or so and across the network at 112 MB/s. So I honestly didn't feel it was a disk issue. bkp003-diagnostics-20170315-1952.zip
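When a disk reads fast in one machine and slow in another, it helps to benchmark the raw device locally on the unRAID box, taking the network, SMB, and the filesystem out of the equation. A minimal sketch; /dev/sdX is a placeholder and it must run as root:

```python
#!/usr/bin/env python3
"""Rough sequential-read benchmark against a raw block device.

Sketch only: /dev/sdX is a placeholder. Run as root, and for repeat
runs drop the page cache first (echo 3 > /proc/sys/vm/drop_caches).
"""
import time

DEV = "/dev/sdX"             # the 8TB Red under test
CHUNK = 64 * 1024 * 1024     # 64 MiB per read
TOTAL = 2 * 1024**3          # stop after 2 GiB

done = 0
start = time.monotonic()
with open(DEV, "rb", buffering=0) as f:
    while done < TOTAL:
        chunk = f.read(CHUNK)
        if not chunk:        # end of device
            break
        done += len(chunk)
elapsed = time.monotonic() - start
print(f"{done / elapsed / 1e6:.0f} MB/s average over {done / 1e9:.1f} GB")
```

If this shows ~180 MB/s like the Windows test did, the drive itself is fine and the bottleneck is somewhere in the controller/network/share path.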
  10. I currently have 56TB plus an identical 56TB backup, so 112TB total. I just started downloading literally a week ago and will only download untouched Blu-ray ISOs. Everything else I have is DVD/Blu-ray rips, which I then convert to MKV. This was spread across 4 unRAID Pro servers, but I am currently in the process of changing over to 8TB Reds, so I will most likely be replacing my 60-bay HGST chassis with a couple of Fractal Define S towers. With drives as large as they are today, 12TB out and 14/16TB coming this year, these big rack chassis are no longer necessary for me personally.
  11. As for drives larger than 10TB, HGST announced last year that 12TB and 14TB drives would be coming this year. I've said this many, many times over the past five years: spinners aren't going anywhere and will continue to get larger. SSDs will eventually outpace spinners in raw size, but they won't replace spinners till their prices come down. When I can buy a 10TB SSD for $400, I'll be interested. Till then, spinners will stay in my chassis. That's not going to happen for many years.
  12. BTW, where is this roadmap you speak of?? *edit. That is meant to be literal, not sarcastic.
  13. I'm not familiar with that specific card, so I would do a quick search to see if anyone else is using it with unRAID without issues. Other than that, yes: just plug in the card, plug your external SAS cable from the card into the JBOD, and you're off to the races. Again, I'm not familiar with that card, so check whether it's a standard HBA rather than a RAID card.
  14. I will second this. I also run my unRAID servers on disk shelves/JBOD chassis. As long as unRAID can see the HBA and therefore the disks, it doesn't care where your disks are located. I used SM 846 chassis in the past and now use HGST 4U60 JBOD chassis, and unRAID couldn't care less. I'm sure there are plenty of cards that don't work with unRAID, but I have to be honest, I have never come across one that didn't work.
  15. I also never restart when adding a drive. I just stop the array, pop the new drive in, wait a couple of seconds for unRAID to see it, assign it to a slot, and I'm off and running again.
  16. I bought mine with SAS1 backplanes since I was only running 2TB Reds. I bought a couple of SAS2 backplanes dirt cheap on eBay shortly thereafter. I wasn't planning on changing my drives out, and the prices went way up on the SAS2 backplanes, so I swapped them back out and sold them. The profit I made on the SAS2 backplanes completely paid for my original chassis, so I figured why not. I keep all my drives spun down anyway, so I wasn't concerned about the power usage of running "more" smaller-capacity drives. I tried to move up to 8TB Reds at one point, but the ones I bought (new) were noisy as hell and honestly sounded unbalanced, so I stuck with my 2TB Reds, as they haven't given me a single issue. If I change drives in the future, I'll either go for 10+ TB drives or wait a bit till prices come down on SSDs and move to an all-SSD setup. I've still got a while left on the warranty for over half my 2TB drives, so I'm in no hurry.
  17. The 900s are loud if you're referring to the PWS-902-1R. I replaced mine with the super-quiet models, PWS-920P-SQ, and the fans never spin up, as I don't put anywhere near a high load on them since these are pure storage servers. I removed the rear fans, as they are screamers, and run my (3) midplane fans off the motherboard fan controller. So is it louder than my Fractal Define R4 desktop? Yes. But it's nowhere near what I would consider "loud".
  18. I would recommend the SM846 way, way over the Norco. Sorry to anyone reading this who has a Norco chassis, but they are extremely low quality compared to SM chassis. I have (4) Norco 2212s with garbage backplanes, and Norco won't even respond to emails or phone calls anymore, even though they are still under warranty. So they are in their boxes with the plastic still on, where they will stay till I scrap all four of them. Save yourself the headache and buy a quality SM chassis.
  19. I'm sure someone will chime in and link you to the exact procedure from the wiki, but here's the question that comes to mind: if you're wanting to use them in a different system, I would understand. But if you're just looking to save power like you mentioned, why not set them to spin down after "xx" minutes and not have to worry about it at all?
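unRAID exposes this per-disk spin-down delay in its GUI; on a plain Linux box the same idea is a standby timer set with hdparm, shown here just to illustrate the mechanism. Sketch only, with a placeholder device, run as root:

```python
#!/usr/bin/env python3
"""Set a drive's standby (spin-down) timer via hdparm.

Sketch only: /dev/sdX is a placeholder. With -S, values 241-251
mean units of 30 minutes, so 242 spins the drive down after one
hour of idle time.
"""
import subprocess

subprocess.run(["hdparm", "-S", "242", "/dev/sdX"], check=True)
```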
  20. I was worried the 60-bay HGST was gonna be a pain to pull out, but honestly it slides like silk. I can pull it out with one finger. But I know what you're getting at. I didn't need them, but I couldn't pass up the price. I was about to upgrade a backplane on one of my SM 846 chassis when I came across the 60-bay. So I thought, why buy a used SAS2 backplane when I can get an entire new SAS3 chassis for about the same price? I just couldn't help myself.
  21. There were some cheap Chenbro chassis on eBay lately that held 45-48 drives, can't remember exactly. I picked up some of the HGST 60-bay SAS3 chassis (4U60) a couple of months ago, dirt cheap, new in the box. I only mention those as they were substantially cheaper than the backup pods, or whatever you want to call them.
  22. "With 8TB and larger drives available now, I can't see why anyone would put that many in a single system with only 2 parity drives." Because not everyone has 8TB drives or even wants 8TB drives. To put this into perspective, replacing my current capacity with 8TB drives would cost me $7,680 (WD Reds). A higher-limit license would be pennies compared to the cost of drive replacement, but it would allow me to cut down the number of physical head units (motherboard/proc/ram...) that I currently run for unRAID and let me fully utilize my JBOD chassis.

     As to the parity drives: not everyone uses parity either. I do on some of my systems and don't on others. From a drive-replacement perspective, it's much faster for me to copy the missing files of a failed drive to a new drive over the network than it is to do a parity rebuild. That's not the only reason to use parity; I'm just giving one example. And to take it a step further, I've commented on parity sets and multiple arrays in the past. So let's assume it was left the way it is, with two parity drives, which is fine by me, but multiple Pro keys were allowed per install. Two keys would allow two arrays of 28 data + 2 parity. If we were allowed to run multiple license keys, I would most likely run three keys per install, consisting of three arrays of 18 data + 2 parity, for a total of 60 drives in my JBOD chassis.

     I'm not using unRAID for the VM or Docker capability, as I don't use either of those features. I run unRAID simply because it does what I could do in, say, Debian + MergerFS, but does it for me in a simple way as far as how it presents it to the end user, and gives me a great GUI on top of that. To say a feature or license like this has no merit (not that you said this) would be the same as if I said the VM and Docker functions should be removed because I personally don't use them, therefore there is no use case for them. That would be very short-sighted on my part, knowing that everyone here uses unRAID for their own reasons. So I simply choose not to use the features that don't apply to me and use the ones I want/need.
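For the curious, the figures above are consistent with a like-for-like capacity swap. A rough reconstruction; the raw-capacity layout and per-drive price are assumptions back-derived from the quoted total, not figures stated in the post:

```python
# Rough reconstruction of the numbers in the post above.
# Assumes 112 x 2TB drives of raw capacity and ~$274 per 8TB Red;
# both are inferences, not figures quoted in the post.
raw_tb = 112 * 2                 # -> 224 TB raw
drives_8tb = raw_tb // 8         # -> 28 x 8TB Reds
print(drives_8tb * 274)          # -> $7,672, close to the $7,680 quoted

# Proposed multi-key layout: three arrays of 18 data + 2 parity
print(3 * (18 + 2))              # -> 60 drives, filling a 60-bay JBOD
```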
  23. It basically created a RAID1 cache pool. So yes, it's working properly.
  24. Incorrect. I believe what he is asking is that although 5TB is 5TB to you or me, when you get right down to the exact byte counts, individual drives can differ. Meaning if he buys a drive from manufacturer xyz, it may in fact be slightly smaller than all his others, thus not allowing him to use it as a parity drive, because parity must be as large as or larger than his largest data drive. Although I haven't run into this myself, it has been noted on the forums in the past.
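The simple way to catch this before assigning parity is to compare exact byte counts instead of the label capacity. A minimal sketch using blockdev; the device paths are placeholders and it needs root:

```python
#!/usr/bin/env python3
"""Verify a candidate parity drive is >= every data drive, in bytes.

Sketch only: device paths below are placeholders; run as root.
"""
import subprocess

def size_bytes(dev: str) -> int:
    # blockdev --getsize64 prints the exact device size in bytes
    out = subprocess.run(["blockdev", "--getsize64", dev],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

parity = "/dev/sdX"                  # candidate parity drive
data = ["/dev/sdY", "/dev/sdZ"]      # existing data drives

largest = max(size_bytes(d) for d in data)
short = largest - size_bytes(parity)
print("parity OK" if short <= 0 else f"parity is {short} bytes too small")
```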
  25. I agree. I also mapped my shares as network locations to get around this.