jschwartz

Everything posted by jschwartz

  1. I'm running ESXi 5.5 with unRAID 5 and a ZFS VM to provide datastore storage. I set it up a few years ago, when SSDs were painfully expensive. Given how cheap SSDs are now, I'm considering losing the Napp-it install and throwing a couple of biggish SSDs into a cache pool... using that as an NFS datastore... but I haven't seen anyone on the forums mention doing that successfully (until now). What is the speed / stability like for your NFS shares? I'm hoping for no re-emergence of the old 'stale file handle' issue in NFS in unRAID 6? The other option is just to assign the HBA to ESXi instead of ZFS and use the SSDs directly in ESXi. I just like leaving some of the SSD space in unRAID as cache.
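     For anyone weighing the same route, a minimal sketch of what cache-pool-as-datastore looks like on the command line. The export path, server IP, and datastore name here are made up for illustration, and unRAID normally manages /etc/exports itself through the share settings, so treat this as the manual equivalent, not the recommended procedure:

     ```shell
     # On the unRAID side: export a cache-only share over NFS.
     # (Hypothetical path; unRAID's GUI writes /etc/exports for you.)
     echo '"/mnt/cache/vmstore" *(rw,async,no_subtree_check,no_root_squash)' >> /etc/exports
     exportfs -ra

     # On the ESXi host: mount that export as an NFS datastore.
     # (Hypothetical host/volume name.)
     esxcli storage nfs add --host 192.168.1.10 --share /mnt/cache/vmstore --volume-name unraid-ssd
     ```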
  2. Is this in unRAID 6? I've got unRAID 5 and can see the drive just fine as a SCSI disk on ESXi 5.5. Wondering if I would need to recreate it as IDE for 6 when I upgrade. (I also upgrade over a share, which is terribly convenient - I'd hate to give it up.)
  3. My ZFS VM is outgrowing its britches (upgrading drives in quads is a pain, and I want that 6GB of RAM back!), so I was considering using mirrored disks in a BTRFS cache pool (some SSD, some spinners) as datastores for other VMs, shared via NFS mount on unRAID. I also keep my photos on the ZFS filesystem (faster speed, bitrot prevention). This data would not be copied to the array, just left on the cache pool drives as storage. I currently do this via the napp-it/ZFS VM, but I'd love to 'downsize' and use unRAID for all my storage. Does anyone have any experience with NFS shares in unRAID 6? I tried this in unRAID 5, and the NFS shares proved to be a bit flaky at times (reconnect issues, stale file handles, etc.). Also, how does the speed look for BTRFS cache pools? Anyone have any benchmarks with SSD vs. spinners? (I didn't see any posted of late.) I'm still on unRAID 5, but as the 6 RCs start trickling out, it seems like a good time to investigate!
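     For anyone testing the same idea, a rough sketch of building a mirrored btrfs pool by hand and getting a crude write number out of it. Device names and the mount point are hypothetical; on unRAID 6 the GUI assembles the cache pool for you, so this is just the manual equivalent:

     ```shell
     # Hypothetical devices -- substitute your own.
     mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # mirror data and metadata
     mkdir -p /mnt/cache
     mount /dev/sdb /mnt/cache                        # either device mounts the pool

     # Crude sequential-write benchmark; oflag=direct bypasses the page cache
     # so the number reflects the disks rather than RAM.
     dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=4096 oflag=direct
     ```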
  4. Mine has 4GB RAM and 2 cores, and I have not had any issues whatsoever. Though I run only unRAID in its VM; everything else runs in other VMs.
  5. http://www.newegg.com/Product/Product.aspx?Item=N82E16822149396&Tpk=N82E16822149396
  6. Have you tried removing the RocketRAID temporarily and seeing if there is a conflict with it?
  7. It's the Intel RES2SV240. I use it as 2 in / 4 out for essentially full-speed spinner bandwidth (since the 2308 is PCIe 3.0 at 8 lanes, if my math is correct, I wouldn't be limiting any drive's speed even on a full parity sync). The 8 other bays in the case are piped through the M1015 to Napp-it, with an SSD hanging off the motherboard SATA for VMs. The last firmware for the expander was July of last year; the card came with the latest. I haven't identified the culprit yet, since I don't have an extra reverse breakout cable or backplane. Everything seems to be working perfectly with the motherboard though; the rebuild rate was intensely fast.
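     A quick sanity check of that math, with rounded numbers (the ~200 MB/s peak per spinner is an assumption, not a measurement):

     ```shell
     # SAS2308 uplink: PCIe 3.0 x8 is roughly 7900 MB/s usable -- not the bottleneck.
     # Expander side: 2 wide ports in = 8 SAS2 lanes at 6 Gb/s (~600 MB/s each).
     expander_in_mb=$((8 * 600))          # 4800 MB/s into the expander
     drives=16                            # worst case: every bay busy at once
     per_drive_mb=200                     # assumed peak for a fast spinner
     demand_mb=$((drives * per_drive_mb)) # 3200 MB/s during a full parity sync
     echo "headroom: $((expander_in_mb - demand_mb)) MB/s"
     ```

     So even with all 16 drives streaming at once, the two uplink ports leave headroom; with only 10 drives + parity on the expander, the margin is wider still.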
  8. Looks like a backplane or cabling issue. Different slot, rebuilding no errors!
  9. Moved everything over, and it recognized all my 3TB drives, including WD Red. I am running into some strange syslog messages that I haven't seen before in my travails. If anyone has any thoughts, I'm all ears. I've got 10 drives + parity hooked up to an Intel expander, which is attached to a pair of reverse breakout cables from the LSI 2308 on the MB. (Running an unRAID 5.0.2 setup in a VM on ESXi.) I've also got a ZFS setup hanging off an M1015 in another VM. [pre]
Nov 22 22:16:08 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:11 tower last message repeated 2 times
Nov 22 22:16:26 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:16:31 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:16:33 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:34 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:38 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:16:40 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:01 tower last message repeated 12 times
Nov 22 22:17:03 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:17:05 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:17 tower last message repeated 7 times
Nov 22 22:17:19 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:17:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:17:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:27 tower last message repeated 2 times
Nov 22 22:17:42 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:17:44 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:14 tower last message repeated 16 times
Nov 22 22:18:18 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:33 tower last message repeated 3 times
Nov 22 22:18:34 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:18:37 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:40 tower last message repeated 2 times
Nov 22 22:18:47 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:48 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:19:00 tower last message repeated 6 times
Nov 22 22:19:05 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:19:07 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:19:36 tower last message repeated 14 times
Nov 22 22:19:57 tower kernel: sd 0:0:11:0: attempting task abort! scmd(f244fd80)
Nov 22 22:19:57 tower kernel: sd 0:0:11:0: [sdm] CDB:
Nov 22 22:19:57 tower kernel: cdb[0]=0x85: 85 08 0e 00 00 00 01 00 00 00 00 00 00 00 ec 00
Nov 22 22:19:57 tower kernel: scsi target0:0:11: handle(0x0014), sas_address(0x5001e677b9e4dff7), phy(23)
Nov 22 22:19:57 tower kernel: scsi target0:0:11: enclosure_logical_id(0x5001e677b9e4dfff), slot(23)
Nov 22 22:19:58 tower kernel: sd 0:0:11:0: task abort: SUCCESS scmd(f244fd80)
Nov 22 22:20:00 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:20:11 tower last message repeated 4 times
Nov 22 22:21:40 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:21:42 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)[/pre]
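     One quick way to see how often each mpt2sas code is firing is to tally them straight from the syslog (path assumed; unRAID 5 logs to /var/log/syslog):

     ```shell
     # Count occurrences of each mpt2sas log_info code, most frequent first.
     grep -o 'log_info(0x[0-9a-f]*)' /var/log/syslog | sort | uniq -c | sort -rn
     ```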
  10. I've got the X10SL7-F booting to ESXi, and have been testing it / setting it up for a few weeks. It's been working for my test drives up to 2TB (I don't have any larger spares lying around). My 'big move' of all my drives from the old build to the virtualized build should happen tonight. I've got a few of the WD Red 3TBs, and will post success/failure after I put them in. (Though I'm a bit nervous now, especially since I've got an Intel expander between the 2308 and the drives!) I do have my 2308 flashed to the 16 IT firmware.
  11. I just got my ESXi box up and running, and am running into this issue with a Debian VM sending files to unRAID. (unRAID is on a separate box still; the Debian VM uses the OmniOS/ZFS VM as working storage.) Was there a resolution in the release build, and/or any way to fix this other than unmounting / remounting the share? Or should I just mount it as Samba and not fight the fight? (No cache drive, over a user share.)
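      In the meantime, the usual client-side workaround when a handle goes stale is a lazy unmount followed by a remount (the mount point and server path below are hypothetical):

      ```shell
      # Hypothetical mount point and export -- adjust for your setup.
      umount -l /mnt/unraid     # lazy unmount: detach now, clean up when no longer busy
      mount -t nfs tower:/mnt/user/media /mnt/unraid
      ```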
  12. http://www.newegg.com/Product/ComboDealDetails.aspx?SID=mBqzdDi8EeOsNupU7pyETQkGN4_ewnk3_0_0_0&AID=10440897&PID=1225267&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-_-cables-_-na-_-na&ItemList=Combo.1455466&cm_sp= It's not listed in the description, but these are the DT01ACA300 drives, except they come with the extra year of warranty.
  13. Finally ready to retire my 6+ year old Asus P5B-VM DO with a new setup, and I wanted to try my hand at virtualization, running (1) unRAID, (2) Plex / sab / etc., (3) IPCop or pfSense, (4) Minecraft, and (5) WinServer 2012. My proposed setup would bring my 13 (so far) drives from my current setup over to the new build. Case: Norco 4224. MB: Supermicro X10SLH-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182822 RAM: 16GB Samsung ECC Unbuffered http://www.superbiiz.com/desc.php?name=D38GE1600S CPU: Intel Xeon (Haswell) E3-1240v3. HBA: 2 x M1015. PSU: Corsair AX760. If I expand my unRAID pool past 16 drives, I would look at the Intel expander. I've done the math, and it seems like going M1015 => 8 drives and M1015 => Intel => 16 drives (2 in / 4 out on the Intel) would all stay ahead of the theoretical max of spinning disks. (Fairly certain I've just talked myself into starting with one M1015 => Intel unless someone sees an advantage I don't.) The part I'm really not sure about is the datastore for ESXi. Can I just mirror two drives (or two sets of 2) on the motherboard SATA ports? I have an AOC-SASLP-MV8 from the old system I can pull if necessary (if the MB ports are not usable), but I'd rather not. Thoughts? Thanks!
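      That expander math, spelled out with rounded numbers (the ~200 MB/s per spinner is an assumption; the M1015 is PCIe 2.0 x8, unlike the 2308 discussed elsewhere):

      ```shell
      # M1015 -> expander: 2 wide ports in = 8 SAS2 lanes at ~600 MB/s each.
      link_mb=$((8 * 600))                 # 4800 MB/s into the expander
      pcie_mb=4000                         # PCIe 2.0 x8, ~500 MB/s/lane usable
      drives=16
      per_drive_mb=200                     # assumed spinner peak
      demand_mb=$((drives * per_drive_mb)) # 3200 MB/s during a parity sync
      # The bottleneck is the smaller of the SAS uplink and the PCIe slot.
      if [ "$link_mb" -lt "$pcie_mb" ]; then bottleneck_mb=$link_mb; else bottleneck_mb=$pcie_mb; fi
      echo "bottleneck: ${bottleneck_mb} MB/s vs worst-case demand: ${demand_mb} MB/s"
      ```

      So on these assumptions the PCIe 2.0 slot is the limiting link, and it still sits comfortably above what 16 spinners can demand at once.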
  14. The new wave of Intel 775 MBs are out, and some are sporting 8 SATA ports on-board (using the ICH9R). They also support port multipliers, meaning you could hang all 14 drives right off the MB! Is this chipset supported in the Linux 2.6 kernel that unRAID 4.0 is based on? (I'm in the final pre-purchase phase for my server, so a big Thanks! for the help!)
  15. The plan is to integrate one once I pass the 6-drive mark, but browsing on Newegg, there are a couple of PCI Express x4 4-port SATA II controllers under $200. I don't know if they are compatible with unRAID, but I should probably check. The SoNNeT TSATAII-E4i PCI Express x4 SATA II Controller Card (RAID 0/10) was $169.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16816122014 and the HighPoint RocketRAID 2310 PCI Express x4 (x8 and x16 slot compatible) SATA II Controller Card was about $129.99 IIRC http://www.newegg.com/Product/Product.aspx?Item=N82E16816115027
  16. I am planning a home media server, and this product REALLY looks promising. In consideration for going 'whole hog', I think a Stacker case, IcyDock trays, and a bunch of WD5000AAKS 500GB drives would be my jumping-off point. I would pair this with a Celeron D (Cedar Mill, or Conroe-L if it gets released by then). My main concern is the motherboard. In my research, it looks like the ICH8R is by far the fastest south bridge to get on a MB. Add to that the (eventual) need for SATA add-in cards, and I was considering a MB with at least two x4-lane PCI Express slots (6 standard SATA + 2 PCI Express cards with 4 each, a total of 14 drives... seems like a nice number). It is my understanding that PCI Express is point-to-point, rather than a shared bus a la PCI, so it would be preferable. Are all my assumptions correct up to now? (I am still a noob at the storage side of performance.) So my question: the cheapest two boards I could find (on the egg) that fit the bill were these (they both seem to have GigE on PCI-E also): MSI P965 Platinum http://www.newegg.com/Product/Product.aspx?Item=N82E16813130055 LAN controller: Realtek RTL8111B, and ABIT AB9 QuadGT http://www.newegg.com/Product/Product.aspx?Item=N82E16813127019 LAN controller: Realtek RTL8810SC. Does unRAID support those LAN controllers? Do these MBs look like a go? Any other (potential) issues I have not considered? Thanks!