BobPhoenix

Members
  • Content Count: 3031
  • Joined
  • Last visited
  • Days Won: 1

BobPhoenix last won the day on March 3 2017

BobPhoenix had the most liked content!

Community Reputation: 63 Good

1 Follower

About BobPhoenix
  • Rank: Advanced Member

Converted
  • Gender: Male
  1. The expanders I currently have are built into my Chenbro NR40700 cases (on the drive backplanes) - so probably a Chenbro expander. I have a few Intel RES2SV240s that I used before I switched to the LSI 9201-16i's in my Norco cases. The last change I made was to increase my drive density, so I went back to expanders - the Chenbro expanders built into the NR40700 cases.
  2. I have 18 8TB WD Red non-pro drives. They're working great. But I'm still going to get HGST NAS drives for my HGST servers - if I can get them for a reasonable price. My biggest problem is OEM versus retail: I'll take a retail-packaged NAS drive over an OEM any time. The WD drives were OEM and have been fine, but the only drives that have really given me problems have been OEM packaged and poorly packed by the sellers.
  3. What I do is just run the last cycle or two, depending on where it was when I canceled it. I tell it to skip the pre-read if it was already past the initial pre-read on the first cycle.
  4. Yes, the parity check runs separately for the VM unRAID and the host unRAID. Since I am using expanders rather than connecting directly to a controller, the speed may be reduced, but I think it is mostly the drives - it has nothing to do with unRAID running in a VM. unRAID VM speeds: average speed 104.6 MB/sec with an 11-device array of WD 4TB 5400rpm Red drives; speed is limited by the drives. unRAID host speeds: average speed 139.2 MB/sec with a 17-device array of HGST 6TB 7200rpm NAS drives; speed might be limited by the expander (see the parity-time sketch after this list). Sounds possible. That is essentially what I'm doing.
  5. Change GPU = VNC / QXL to Cirrus for the install; you can change it back after the install if you need to (see the XML sketch after this list).
  6. I have 4 unRAID servers in two 48-bay Chenbro cases. Each box has one bare-metal unRAID server hosting a VM unRAID server. The 48 bays are split at 24 drives per backplane, and the backplanes are SAS expanders, so the drives can't be split up except by backplane. So it was natural to give the VM and the bare-metal unRAID servers on each box 24 drives each. I have one 9211-8i-class card for the bare-metal server and another passed through to the VM server, going to the 2 backplanes.
  7. I use 9201-16i server pulls - I have several. You could also use a SAS expander; I have that in two of my 48-bay Chenbro servers.
  8. Using megarec and sbrempty are part of the steps I used to flash my H310s to LSI IT mode. Basically I did the following steps, from the _READMEFIRST.txt file in the "Toolset_PercH310 to LSIMegaraid" zip file (the .bat files were in H310_to_LSI.zip, which was contained inside that same zip); see the command sketch after this list: 7) Launch each step, Step 1, 1.bat (This will dump all the information about your card to a file named ADAPTERS.TXT. In this file is your SAS Address, which will be required for step 6 (6.bat). Example: HW Configuration ================ SAS Addres
  9. It is my favorite 16-port controller. I'm not familiar with any others, so I have no comparison to give you. But the 9201-16i is IT mode only, so it is plug and play with unRAID.
  10. No, it is the NR40700, a 48-bay case. Sounds interesting. I've got a shelf for the second Chenbro I got that didn't have rails, but you just pull the case out; the shelf is not movable like the rails. I like the rails I got for my other Chenbro - they let you pull it out a long way, and I doubt I could mod a rail to work as well. I'm always looking to see if anyone is selling them on eBay.
  11. They make newer models - https://www.rackmountpro.com/product/1817/RM43348.html - I just don't want to spend $4K on a case. That's why I was sticking with my 24-bay Norco 4224s until I saw the EOL Chenbro case on eBay for $500. The biggest problem I have with the Chenbro I've got is the proprietary rails it uses. If it used standard rails I might have a third.
  12. True. It does depend on whether you have a SINGLE versus DUAL link to the SAS expander. I'm not sure what the bandwidth would be with a SINGLE link like I had when I took my measurement, but with ~16 drives the parity check speed was reduced by about 25% versus the same ~16 drives connected to 2 M1015 controllers (see the bandwidth sketch after this list).
  13. The Chenbro has 2 backplanes, each controlling 24 drives. I doubt there is a non-SAS-expander version; the cases and backplanes are past end of life. No, they just multiply the number of drives able to connect to a single controller. Since they are proprietary you may have trouble finding a replacement; if they go bad, a new case would be likely. In my case I would just use the Norco 4224s that I swapped out. All of my VMs are headless, as in no keyboard, video or mouse - I just RDP into the Windows OSes and VNC into the Linux ones. It should be very easy to
  14. I have a little of that myself. It's why it takes me longer than some to post, since I have to read and reread every post - and then a third time after I post, to fix what I missed before.
  15. Not as far as I know. When I bought my first case I didn't realize it used a backplane with a SAS expander, so I was a little disappointed, but I got used to it. I had used a SAS expander before I got the 9201-16i's to keep PCIe slot usage down, so I was used to them. The SAS expanders on the backplanes appear to be more reliable and faster than the Intel RES2SV240s I was using before I got the Chenbro cases. Most of the difference in speed was likely because I was only using a single connector from my IBM M1015s to the RES2SV240, rather than the DUAL link I can get to the Chenbro.
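A rough sanity check on the average speeds quoted in post 4: a parity check has to read the full capacity of the largest disk, so its duration is roughly drive size divided by average speed. The sketch below is just my own back-of-envelope arithmetic using the figures quoted in that post.

```python
# Parity-check duration ~= largest drive size / average speed.
# Figures are the ones quoted in post 4 above.
def parity_hours(drive_tb, avg_mb_per_s):
    return drive_tb * 1e6 / avg_mb_per_s / 3600  # TB -> MB (decimal), seconds -> hours

print(f"VM array,   4TB @ 104.6 MB/s: ~{parity_hours(4, 104.6):.1f} h")  # ~10.6 h
print(f"Host array, 6TB @ 139.2 MB/s: ~{parity_hours(6, 139.2):.1f} h")  # ~12.0 h
```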
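For post 5, the VNC video driver choice in the VM form maps to the <video> element in the VM's libvirt XML. A minimal sketch, assuming you edit the XML view directly; the vram/heads values are just common defaults, not something from the original post:

```xml
<video>
  <!-- was type='qxl'; switch back after the install if you need to -->
  <model type='cirrus' vram='16384' heads='1'/>
</video>
```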
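For post 8, the .bat files in that toolset wrap a fairly standard megarec/sas2flsh sequence. The sketch below shows the typical manual equivalents from the common H310-to-IT-mode guides, not the exact contents of those .bat files; firmware filenames vary between toolsets, and the SAS address shown is a placeholder for the one recorded in ADAPTERS.TXT.

```
:: 1) From DOS: blank the SBR so the card drops its Dell PERC identity, then wipe the flash
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
:: 2) Reboot back to DOS, then flash the LSI 9211-8i IT-mode firmware and boot ROM
::    (some toolsets flash Dell's 6GBPSAS.fw as an intermediate step first)
sas2flsh -o -f 2118it.bin -b mptsas2.rom
:: 3) Restore the SAS address you saved from ADAPTERS.TXT (placeholder value shown)
sas2flsh -o -sasadd 500605bxxxxxxxxx
```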
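For posts 12 and 15, the SINGLE versus DUAL link difference comes down to how many SAS2 lanes feed the expander. Another back-of-envelope sketch - my own arithmetic, using the usual ~600 MB/s per 6 Gb/s lane after 8b/10b encoding and ignoring protocol overhead:

```python
# Ceiling for a SAS2 expander uplink shared by ~16 drives during a parity check.
LANE_MB_PER_S = 600   # 6 Gb/s SAS2 lane ~= 600 MB/s after 8b/10b encoding
DRIVES = 16

for lanes, label in [(4, "single link (one wide port)"),
                     (8, "dual link (two wide ports)")]:
    uplink = lanes * LANE_MB_PER_S
    print(f"{label}: ~{uplink} MB/s total, ~{uplink / DRIVES:.0f} MB/s per drive")
```

At roughly 150 MB/s per drive, a single link starts to throttle a parity check on fast 7200rpm drives, which lines up with the ~25% reduction mentioned in post 12; a dual link roughly doubles that ceiling.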