JimPhreak

Everything posted by JimPhreak

  1. A "home" network is very subjective at this point. I for one am now running a 4-node vSAN cluster on my home network, which requires 10Gb. Is this the norm? Certainly not. But I do expect home users to continue to push the envelope with their home networks, especially as cord cutting and home automation become more and more popular. I can tell you this: now that I've been running 10Gb on my home network, every time I'm at work my file transfers feel SOOOOO slow. You get used to the speed quickly. It's just like when you buy a new TV. There's a reason they say to always buy the biggest TV you can afford that doesn't look ridiculous in the room: you get used to the size that quickly.
  2. Do you need the power of an E5? The new wave of Xeon D boards (I just picked up 3) comes with dual SFP+ and a built-in LSI controller capable of connecting 16 drives via 4 onboard SAS connectors. Furthermore, the D-1587 offers 16 cores and competes very nicely with many of the E5 chips at 65W.
  3. In small quantities I can understand fearing the warranty status of the external drives. But for those looking to stock their large NASes with these, the choice isn't as simple. If you have $2,000 to spend on drives, you can buy 8 of the externals. You can't even buy 6 of the "full warranty" drives for that. So for anyone who needs 6 drives, for example, you're getting 2 spare drives for less money than the person buying just 6 full-warranty drives.
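The budget math above can be checked directly. The $250 external and $350 bare-drive prices are the figures quoted later in this thread, so treat them as point-in-time assumptions:

```shell
# Price check: $250 per 8TB external vs $350 per bare full-warranty 8TB Red
# (prices quoted elsewhere in this thread; they will change over time).
echo $(( 2000 / 250 ))         # externals a $2,000 budget buys
echo $(( 2000 / 350 ))         # full-warranty drives the same budget buys
echo $(( 8 * 250 - 6 * 350 ))  # cost difference: 8 externals vs 6 bare drives
```

The last line comes out negative, i.e. 8 externals cost less in total than 6 bare drives, which is the point being made.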
  4. The Seagates are still a far better value for UnRAID setups at $220 per drive ($27.50 per TB), IMO. I have 8 of them in each of my servers (main and backup) and they have been working great.
  5. Now available on Amazon. http://www.amazon.com/Red-8TB-Hard-Disk-Drive/dp/B01BYLY4DM/ref=sr_1_1?ie=UTF8&qid=1458250651&sr=8-1&keywords=wd+red+8tb
  6. This is really such a big thing... The average RAID set will be TOAST if more than the allowed number of drives fail. That terrifies me... It's why I run RAID10 at work with spare disks on a shelf. The simple ability to pull drives out of my enclosure and just put them in a Windows box and read them is such a big selling point for me. But to each his own. Some people might be extremely scared of bitrot and not worry about the other thing... To me personally that is weird, like having earthquake insurance for your house but building fires indoors on your table.

     Agreed. I have a backup of my important data (non-media) to a local USB external drive on the premises. I also have a cloud backup of that same important data. I also have my entire UnRAID array (56TB) mirrored to a second UnRAID array off-site via a VPN connection. To me, having a good backup plan puts my mind at ease much more than bitrot protection does.
  7. Not really... just like a smartphone that doesn't make reliable calls... when its prime directive is to be a PHONE, the rest of the stuff is just a toy. But who are you to determine what UnRAID's "prime directive" is? Sounds like you decided that in your own head. Oh my, let's see... yeah, that'd be the CUSTOMER. Don't tell me y'all learned nothing from the tech bubble at the turn of the century! You're not making sense. What have YOU (the customer) decided UnRAID's primary purpose is?
  8. Not really... just like a smartphone that doesn't make reliable calls... when its prime directive is to be a PHONE, the rest of the stuff is just a toy. But who are you to determine what UnRAID's "prime directive" is? Sounds like you decided that in your own head.
  9. Not sure about the 8TB version, but they have a 16TB Duo they just released for pre-order (same ship date as the 8TB WD Red drives themselves), and you can see on the Overview tab that they use the Red drives. My Book Duo OK. Thanks. It's just that you said in your OP, which is why I asked. Yeah, that was just an assumption based on what I had seen on another forum, but it hasn't been confirmed.
  10. Not sure about the 8TB version, but they have a 16TB Duo they just released for pre-order (same ship date as the 8TB WD Red drives themselves), and you can see on the Overview tab that they use the Red drives. My Book Duo
  11. There are no mini-ITX boards that support E5 v2 chips, but there is one that supports v3 chips if you were able to make a trade for the newer chip. http://www.asrockrack.com/general/productdetail.asp?Model=EPC612D4I#Specifications
  12. Just an FYI for anyone about to build a new server: WD has finally released its Red series drives in 8TB capacity. You can pre-order them here (they ship on 4/5, apparently) for $350. However, you can also purchase their new 8TB My Book external drive for $250, which has an 8TB Red drive in it. My assumption, though, is that you would not get warranty support on the drive if it's removed from the external enclosure.
  13. Does anyone know how (if it's possible) to start up a previously installed docker (mainly Linuxserver.io dockers) without Internet access?
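A minimal sketch of one way to do this, assuming the container was created previously so its image is already in Docker's local cache (`docker start` reuses the local image and does not pull, so it needs no Internet access). The container name is just an example:

```shell
# Start an already-created container with no Internet access.
# `docker start` does not pull from a registry; it only needs the
# locally cached image and the existing container definition.
start_container_offline() {
    name="$1"
    if ! command -v docker >/dev/null 2>&1; then
        # On unRAID this usually means the Docker service isn't running yet
        echo "docker CLI not found; is the Docker service started?" >&2
        return 1
    fi
    docker start "$name"
}
```

Usage would be e.g. `start_container_offline unifi`; `docker ps -a` lists the container names available to start.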
  14. How does one download a SMART report for a failed drive? When I try it, the report just reads the following: "Smartctl open device: /dev/sdf failed: No such device"
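For reference, a hedged sketch of pulling the report manually from the shell with smartmontools. That "No such device" error means the kernel no longer sees /dev/sdf at all (the drive has dropped off the bus), which the guard below catches; the output path is an assumption:

```shell
# Save a SMART report for a drive using smartmontools' smartctl.
# "No such device" means the drive has dropped off the bus entirely;
# reseating or power-cycling it so the kernel re-enumerates it must
# come first -- no tool can read SMART data from a missing device.
save_smart_report() {
    dev="$1"
    if [ ! -e "$dev" ]; then
        echo "$dev is not present; reseat or power-cycle the drive first" >&2
        return 1
    fi
    smartctl -a "$dev" > "/tmp/smart-$(basename "$dev").txt"
}
```

Note the device letter can change after a reseat, so it's worth re-checking which /dev/sdX the drive came back as before running this.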
  15. Hard to say, because Intel hasn't released its prices for most of the wave-2 Xeon D CPUs yet, but I expect that info to be released soon. These boards should be available in mid-to-late March from what I'm hearing. Whenever I hear anything I'll report back.
  16. In the next month I'll be upgrading/converging my storage server into a 2-in-1 storage server running UnRAID in one VM for my bulk media and Napp-it with ZFS in another for my VMs and dockers. I'm just waiting for this motherboard to be released for sale which has an onboard LSI controller that can connect 16 drives as well as dual 10gig SFP+ ports. So I won't be using the M1015 in my main server anymore going forward (still will in my backup server but haven't had any drive failure issues in that one yet).
  17. Thanks for the info steve. The fact that you have tried all of that to no avail doesn't leave me optimistic. Sure I can try not spinning my drives down but that's not really a solution to me considering having the ability to spin my drives down is one of the big reasons why I use UnRAID over other solutions.
  18. Power supply is less than 6 months old and is pretty solid. http://www.newegg.com/Product/Product.aspx?Item=N82E16817151124

      OK, I had a look at this PS. I think it is a bit quick to say that it is adequate for the job. It is only a 450W unit. While the current rating on the 12V bus is 37A (444W), I doubt it can ever supply that max current on that bus, as the MB, CPU, fans, etc. will also be drawing power, which must be accounted for in that 450W maximum. You did not provide any other information on those components or the other HDs. Remember that when hard drives are spun up they require a high initial starting current, and there are times when all of the drives are required to spin up at the same time. If that PS goes into power-limit mode, it is most likely going to affect every output voltage coming out. You can find more on PSs in this thread: http://lime-technology.com/forum/index.php?topic=12219.0

      This is the board I'm using in addition to the 12 disks. http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-TLN4F.cfm I haven't updated the M1015 firmware in some time and never have on the RES2SV240 since I bought it. Can I update both from the UnRAID command line?

      You can upgrade the RES2SV240 from the unRAID command line; I've done it that way. I've always done the M1015 from a DOS boot flash, but there are other methods, including UEFI. If you have crossflashed it to be an LSI 9211-8i, then it looks like it would work based on the firmware downloads here: http://www.avagotech.com/products/server-storage/host-bus-adapters/sas-9211-8i#downloads But I would see if someone else can confirm it, since I've only done it from a DOS boot flash.

      Thanks Bob, I will try to update both when I can and see if that helps.
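As a sketch of the command-line route discussed above, assuming the M1015 has already been crossflashed to an LSI 9211-8i in IT mode and that LSI's Linux `sas2flash` utility is available on the server; the firmware filename is a placeholder, not a real download:

```shell
# Flash IT-mode firmware to a crossflashed M1015 (LSI 9211-8i) with sas2flash.
# DANGER: flashing the wrong image can brick the card -- verify the file first,
# and do this with the array stopped.
flash_9211() {
    fw="$1"
    if [ ! -f "$fw" ]; then
        echo "firmware image $fw not found" >&2
        return 1
    fi
    sas2flash -listall     # confirm the controller is detected before flashing
    sas2flash -o -f "$fw"  # -o enables advanced mode, -f writes the image
}
```

As noted in the thread, the DOS/UEFI boot-flash route is the better-tested path for the M1015 itself, so this is worth confirming before relying on it.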
  19. I haven't updated the M1015 firmware in some time and never have on the RES2SV240 since I bought it. Can I update both from the UnRAID command line?
  20. Power supply is less than 6 months old and is pretty solid. http://www.newegg.com/Product/Product.aspx?Item=N82E16817151124 I can maybe swap the M1015's between my main and backup server and see if that makes any difference. In the meantime, how can I re-enable the failed disk without having to do a rebuild?
  21. I have a backup server, but that server is fully populated with drives, so there is no room to add an additional drive at this time. I'm confused about what triggered these errors, though, as I don't believe any data was written to my array since late last night. You can even see that the last mover run moved nothing.

      Feb 28 12:00:01 SPE-UNRAID logger: mover started
      Feb 28 12:00:01 SPE-UNRAID logger: skipping "Docker"
      Feb 28 12:00:01 SPE-UNRAID logger: skipping "Downloads"
      Feb 28 12:00:01 SPE-UNRAID logger: skipping "vdisks"
      Feb 28 12:00:01 SPE-UNRAID logger: mover finished
      Feb 28 12:45:52 SPE-UNRAID kernel: mdcmd (260): spindown 3
      Feb 28 13:16:08 SPE-UNRAID kernel: mdcmd (261): spindown 1
      Feb 28 13:59:00 SPE-UNRAID php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'restart' 'unifi'
      Feb 28 13:59:01 SPE-UNRAID kernel: vethb1e47cc: renamed from eth0
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth7819134) entered disabled state
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth7819134) entered disabled state
      Feb 28 13:59:01 SPE-UNRAID avahi-daemon[5487]: Withdrawing workstation service for vethb1e47cc.
      Feb 28 13:59:01 SPE-UNRAID avahi-daemon[5487]: Withdrawing workstation service for veth7819134.
      Feb 28 13:59:01 SPE-UNRAID kernel: device veth7819134 left promiscuous mode
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth7819134) entered disabled state
      Feb 28 13:59:01 SPE-UNRAID kernel: device veth87a37ae entered promiscuous mode
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered forwarding state
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered forwarding state
      Feb 28 13:59:01 SPE-UNRAID avahi-daemon[5487]: Withdrawing workstation service for vethc67a6bd.
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered disabled state
      Feb 28 13:59:01 SPE-UNRAID kernel: eth0: renamed from vethc67a6bd
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered forwarding state
      Feb 28 13:59:01 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered forwarding state
      Feb 28 13:59:05 SPE-UNRAID emhttp: cmd: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker logs --tail=350 -f unifi
      Feb 28 13:59:16 SPE-UNRAID kernel: docker0: port 8(veth87a37ae) entered forwarding state
      Feb 28 14:01:29 SPE-UNRAID kernel: sd 2:0:5:0: [sdj] Synchronizing SCSI cache
      Feb 28 14:01:29 SPE-UNRAID kernel: sd 2:0:5:0: [sdj] UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
      Feb 28 14:01:29 SPE-UNRAID kernel: sd 2:0:5:0: [sdj] CDB: opcode=0x88 88 00 00 00 00 00 5f 02 71 b0 00 00 00 08 00 00
      Feb 28 14:01:29 SPE-UNRAID kernel: blk_update_request: I/O error, dev sdj, sector 1593995696
      Feb 28 14:01:29 SPE-UNRAID kernel: md: disk5 read error, sector=1593995632
      Feb 28 14:01:29 SPE-UNRAID kernel: sd 2:0:5:0: [sdj] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
      Feb 28 14:01:29 SPE-UNRAID kernel: mpt2sas0: removing handle(0x000f), sas_addr(0x5001e677b7db5fed)
      Feb 28 14:01:40 SPE-UNRAID kernel: md: disk5 write error, sector=1593995632
      Feb 28 14:01:40 SPE-UNRAID kernel: md: recovery thread woken up ...
      Feb 28 14:01:40 SPE-UNRAID kernel: write_file: write error 4
      Feb 28 14:01:40 SPE-UNRAID kernel: md: could not write superblock from /boot/config/super.dat
      Feb 28 14:01:40 SPE-UNRAID kernel: md: recovery thread has nothing to resync
      Feb 28 14:01:40 SPE-UNRAID kernel: scsi 2:0:14:0: Direct-Access ATA WDC WD30EFRX-68E 0A82 PQ: 0 ANSI: 6
      Feb 28 14:01:40 SPE-UNRAID kernel: scsi 2:0:14:0: SATA: handle(0x000f), sas_addr(0x5001e677b7db5fed), phy(13), device_name(0x0000000000000000)
      Feb 28 14:01:40 SPE-UNRAID kernel: scsi 2:0:14:0: SATA: enclosure_logical_id(0x5001e677b7db5fff), slot(13)
      Feb 28 14:01:40 SPE-UNRAID kernel: scsi 2:0:14:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: Attached scsi generic sg9 type 0
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] 4096-byte physical blocks
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] Write Protect is off
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] Mode Sense: 7f 00 10 08
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] Write cache: enabled, read cache: enabled, supports DPO and FUA
      Feb 28 14:01:40 SPE-UNRAID kernel: sdl: sdl1
      Feb 28 14:01:40 SPE-UNRAID kernel: sd 2:0:14:0: [sdl] Attached SCSI disk
      Feb 28 14:02:02 SPE-UNRAID sSMTP[9523]: Creating SSL connection to host
      Feb 28 14:02:02 SPE-UNRAID sSMTP[9523]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
  22. I don't believe any of the disks are bad, though. None have any SMART errors that I can see, and they passed short and long SMART tests. I also tested them with WD Data Lifeguard and all passed. After I replaced the first 2 failed disks, I then ran 3 preclears on them as well, and they completed those runs fine.
  23. I have 12 disks attached to the controller (8 in the array, 4 in cache pool). I only have 6 onboard SATA ports.
  24. Multiple drives on the same server. Just not sure how to go about isolating the issue. Plus, the disk failures have been weeks apart too, so it's hard to pinpoint.