BobPhoenix

Everything posted by BobPhoenix

  1. The expanders I currently have are built into my Chenbro NR40700 cases (on the drive backplanes), so probably a Chenbro expander. I have a few Intel RES2SV240s that I used before I switched to the LSI-9201-16i's in my Norco cases. The last change I made was to increase my drive density, which took me back to expanders - the ones built into the NR40700 cases.
  2. I have 18 8TB WD Red non-pro drives, working great. But I am still going to get HGST NAS drives for my HGST servers - if I can, for a reasonable price. My biggest problem is OEM versus retail: I'll take a retail-packaged NAS drive over an OEM any time. The WD drives were OEM and have been fine, but the only drives that have really given me problems were OEM drives that were poorly packaged by the sellers.
  3. What I do is just run the last cycle or two depending on where it was when I canceled it. I tell it to skip the pre-read if it was past the initial pre-read on the first cycle.
  4. Yes, parity check runs separately for the VM unRAID and the host unRAID. Since I am using expanders rather than connecting directly to a controller, the speed may be reduced, but I think it is mostly the drives; it has nothing to do with unRAID running in a VM. unRAID VM: average speed 104.6 MB/sec with an 11-device array of WD 4TB 5400rpm Red drives - speed is limited by the drives. unRAID host: average speed 139.2 MB/sec with a 17-device array of HGST 6TB 7200rpm NAS drives - speed might be limited by the expander. Sounds possible. That is essentially what I'm doing, except internally, but I have not tried it so cannot confirm it will work. The drives are not visible to the host when you pass through the controller to the VM; you use the standard GUI connected to the VM to control them. NOTE: because I have two LSI controllers with identical ID information and only one of them is passed through to the unRAID VM (the other is used by the unRAID host), I had to hide the card with xen-pciback.hide instead of vfio-pci.ids (see the sketch after this list).
  5. Change GPU = VNC / QXL to Cirrus for the install; you can change it back after the install if you need to (a sketch of the corresponding XML change is after this list).
  6. I have 4 unRAID servers in 2 48-bay Chenbro cases. Each box has one bare metal unRAID server hosting a VM unRAID server. The 48 bays are split 24 drives per backplane, and the backplanes are SAS expanders, so the drives can't be divided up except along backplane lines. It was therefore natural to give the VM and the bare metal unRAID servers on each box 24 drives each. I have one 9211-8i-class card for the bare metal server and another passed through to the VM server, one going to each of the 2 backplanes.
  7. I use 9201-16i server pulls; I have several. You could also use a SAS expander - I have that in two of my 48-bay Chenbro servers.
  8. Using megarec and sbrempty is part of the steps I used to flash my H310s to LSI IT mode. Basically I followed the steps below (from the _READMEFIRST.txt file in the "Toolset_PercH310 to LSIMegaraid" zip file - the .bat files were in H310_to_LSI.zip, which was contained in that same zip):
     7) Launch Step 1, 1.bat. This dumps all the information about your card to a file named ADAPTERS.TXT. In this file is your SAS Address, which will be required for step 6 (6.bat). Example: HW Configuration / SAS Address : 500605b001f31fa0
     8) Step 2, 2.bat saves your current controller's SBR to a file Mega.sbr. Please rename it to the card model you are using this on (example: IBM M1015 - rename mega.sbr to SBRM1015.bin) and post it on the forum (would like to collect all SBRs).
     9) Step 3, 3.bat wipes your current SBR and clears your controller's BIOS.
     10) Step 4: shut down your system, plug the USB stick into another system where you can open the ADAPTERS.TXT file (dumped in step 1) and write down your SAS controller's address. Hint: prepare step 6 by going to directory "5_LSI_P16" and editing 6.bat to insert your controller address.
     11) Put the USB stick back in the system with the Mega card and boot to USB again.
     _____________ from here on modified to flash the DELL Perc H310 _____ 23.08.2013 _____ Fireball
     IMPORTANT NOTE: The batch files are prepared so that the IT firmware is flashed with NO BIOS. If you need the BIOS, edit the batch files and make sure the command includes "-b mptsas2.rom" (simply remove the REM).
     12) Step 5.1 - flash the original DELL IT firmware: move into directory "5_DELL_IT" and call 5IT.bat
     13) Step 5.2 - flash the LSI 9211-8i (P7) IT firmware: move into directory "5_LSI_P7" and call 5IT.bat
     14) Step 5.3 - flash the LSI 9211-8i (P16) IT firmware: move into directory "5_LSI_P16" and call 5IT.bat
     15) Step 6 - reprogram the adapter address: call 6.bat in the directory "5_LSI_P16" that you prepared in step 4. If you didn't prepare the batch file, you have to issue the command manually; obtain your SAS Address from the ADAPTERS.TXT file.
     I wanted the BIOS, so I added "-b mptsas2.rom" to the end of the commands (a rough sketch of the underlying commands is after this list). Hope that helps, because all I've ever done is use the .bat files on a motherboard that I found would flash my IBM M1015's. I've never used the UEFI method. If you want the TOOLSET file I used, I can upload it if it isn't already in one of Fireball3's posts.
  9. It is my favorite 16-port controller. I'm not familiar with any others, so I have no comparison to give you. But the 9201-16i is IT mode only, so it is plug and play with unRAID.
  10. No, it is the NR40700, a 48-bay case. Sounds interesting. I've got a shelf for the 2nd Chenbro I got that didn't come with rails, but you just pull the case out onto the shelf - the shelf doesn't slide like the rails do. I like the rails I got for my other Chenbro; they let you pull it out a long way, and I doubt I could mod a generic rail to work as well. I'm always looking to see if anyone is selling them on eBay.
  11. They make newer models: https://www.rackmountpro.com/product/1817/RM43348.html - I just don't want to spend $4K on a case. That's why I was sticking with my 24-bay Norco 4224s until I saw the EOL Chenbro case on eBay for $500. The biggest problem I have with the Chenbro I've got is the proprietary rails it uses. If it used standard rails I might have a third.
  12. True. It does depend on whether you have a SINGLE or DUAL link to the SAS expander. Not sure what the bandwidth would be with a SINGLE link like I had when I took my measurement, but with ~16 drives the parity check speed was reduced by about 25% versus the same ~16 drives connected to 2 M1015 controllers.
  13. The Chenbro has 2 backplanes, each controlling 24 drives. I doubt there is a non-SAS-expander version; the cases and backplanes are past end of life. No, they just multiply the number of drives able to connect to a single controller. Since they are proprietary you may have trouble finding a replacement; if one goes bad, a new case would likely be needed. In my case I would just go back to the Norco 4224's that I swapped out. All of my VMs are headless, as in no keyboard, video or mouse - I just RDP into the Windows OSes and VNC into the Linux ones. It should be very easy to pass through USB devices to a VM. On one box I have an unRAID flash key passed straight through to the unRAID VM: I just selected the USB flash drive in the GUI for the VM and it was available in the VM. On the other box I had to pass through the whole controller, because when unRAID booted off one flash drive it would find the other one and connect the wrong one as the "flash"/boot device - so I solved it by passing the whole controller to the VM. So I'm not going to be much help with video, keyboard and mouse passthrough. Sorry.
  14. I have a little of that myself. That's why it takes me longer than some to post - I have to read and reread every post, and then a third time after I post to fix what I missed before.
  15. Not as far as I know. When I bought my first case I didn't realize it used a backplane with a SAS expander, so I was a little disappointed, but I got used to it. I had used a SAS expander before I got the 9201-16i's to keep the PCIe slot usage down, so I was used to them. The SAS expanders on the backplanes appear to be more reliable and faster than the Intel RES2SV240s I was using before I got the Chenbro cases; most of the difference in speed was likely because I was only using a single link from my IBM M1015's to the RES2SV240 rather than the DUAL link I can get to the Chenbro. Yes, I have 2 USB flash drives in each case. On one box I can just pass the 2nd USB flash drive through to the VM directly. The other box didn't work that way (a different MB that didn't behave the same), so I had to pass through a WHOLE USB controller to make it work. Actually I wouldn't mind if the limit per array was reduced to 24 drives, as long as you could have multiple arrays in one box.
  16. No - Slackware. I don't believe it has a package manager. The OS drive is a flash drive, which is also the key to the license. The OS is loaded from the flash drive into RAM and run from there, so reads and writes to the flash drive are kept to a minimum to make it last longer, and access is as fast as possible since the OS is RAM-based, not drive-based, once it is up and running. The hardware compatibility list is here: https://wiki.unraid.net/Hardware_Compatibility but I'm not sure how up to date it is. I would not expect any problems, as it runs on most anything as far as I know. The rest I will leave to others with better knowledge to answer.
  17. It can. The 9201-16i has 4 ports and handles 16 drives; the second one handles an additional 16 drives. But my Norco 4224 only handles 24 drives without mods. The backplanes in the Chenbro have the SAS expander built into them and you cannot bypass it, so the only way to use the case is to use the SAS expander. It lets me use all 48 bays in the case. If I didn't have an unRAID VM running on the unRAID host I would only be able to use 30 of the bays for the array; the rest (18) would have to hold drives outside the array. If unRAID allowed multiple arrays on one box I wouldn't have done the VM. It has been asked for (multiple arrays), but I haven't seen that it has been implemented. Also the maximum in the array is still 30 (the most I would want in a single array anyway) as far as I know. The biggest thing I like about unRAID is that the individual drives are readable if removed from the array, and if you have more drives fail than you have parity protection you don't lose the whole array of data. Most of the time, even if your array "throws" a drive, you can still read the info on it - it just isn't in sync with parity - so you could retrieve MOST of the info off the drive before it completely fails. I don't believe FreeNAS can do that, nor can most other solutions I've seen.
  18. Never had a 9201 fail yet, so I don't know, but I don't expect any failures for years to come. The 9201-16i's have 4 SAS ports, which allow connecting 16 SATA drives or, in my case, 4 SAS-to-SAS backplane cables. The second 9201-16i uses 2 of its four connectors for the 2 remaining backplane connectors to give me the 24 drives the case supports. The other 8 ports can be used (4 internal and 4 external) with case mods to bring the count to 32 drives total. With the Chenbro cases I am running an unRAID VM on an unRAID host. Each instance of unRAID has a Dell H310 or IBM M1015 cross-flashed to LSI 9211, connected to a SAS expander which controls 24 drive bays on the backplane of the case. So I have 4 unRAID servers in 2 cases: 2 host unRAID servers running VMs, with one of the VMs on each being another unRAID server.
  19. I have used SAS expanders; they cut parity check speed by about 25%. Even with that it was averaging 65-136 MB/s depending on the drives, so not too bad. But my preference is to not use them unless I have to. I have a couple of Chenbro NR40700 cases that support 48 drive bays but use 2 built-in SAS expanders. For normal read and write operations I get full speed, and parity checks are only once a month, so I can live with the slower parity check speed for the increased drive density. My other 24-bay server uses LSI-9201-16i cards to reduce PCIe slot usage and save slots for tuner cards to record OTA and ClearQAM TV.
  20. Not as bad: I lived in Apartment 13 at 133 Beetle Drive for a year with no ill effects.
  21. So you have enabled "SMB 1.0/CIFS File Sharing Support" in Windows Features (like the attached graphic)? I thought that was required to truly enable it. Actually, maybe this would have been a better link than the one I used earlier: https://community.spiceworks.com/topic/2106698-windows-10-pro-can-t-see-computers-on-my-network The other thing I have done is disable SMB2/3 on Win10, as documented here: https://support.microsoft.com/en-us/help/2696547/how-to-detect-enable-and-disable-smbv1-smbv2-and-smbv3-in-windows-and (a rough sketch of the commands is after this list).
  22. It sounds like all you have to do is rebuild parity once you have established which drives are your parity drives - to me at least; you might see if more people agree with my opinion. I have rebuilt my parity many times. I have done it when I have had to move around a lot of data to get it organized the way I want, or to rearrange my disks in a 2-parity system. I just disconnect parity drive 2 (or both if I'm actually moving data and not just disks), then rebuild parity by adding back my parity drives.
  23. See if this helps: https://windowsreport.com/network-discovery-disabled/
  24. What does the XML look like for your NVMe device? What you are looking for is: <boot order='1'/> If that tag isn't on your NVMe device, then remove it from the device it is on and add it to the NVMe device (see the sketch after this list). Every time you edit the VM you may have to repeat the process.
  25. I just saw this on a thread myself. The multiple pool feature request thread to be specific.
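
Sketch for item 4: hiding one of two identical LSI controllers from the unRAID host so only that one ends up in the VM. This is just an illustration of the difference between the two boot options mentioned there - the PCI address 0000:03:00.0 and the 1000:0072 vendor:device ID (a SAS2008-based card) are placeholder examples, not values from these posts; check your own card with lspci before editing syslinux.cfg.

```
# /boot/syslinux/syslinux.cfg append line (illustrative only)

# Binding by vendor:device ID would grab BOTH cards, since they report the same ID:
#   append vfio-pci.ids=1000:0072 initrd=/bzroot

# Hiding by PCI address takes only the one card meant for the VM:
append xen-pciback.hide=(0000:03:00.0) initrd=/bzroot
```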
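
Sketch for item 5: what the GPU = VNC / QXL to Cirrus change corresponds to in the VM's libvirt XML, assuming you edit the XML by hand instead of using the GUI. The vram/heads values are typical libvirt defaults shown only for illustration.

```xml
<!-- before (QXL): -->
<video>
  <model type='qxl' vram='65536' heads='1'/>
</video>

<!-- for the install (Cirrus): -->
<video>
  <model type='cirrus' vram='16384' heads='1'/>
</video>
```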
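
Sketch for item 8: roughly the commands those numbered .bat files wrap, pieced together from the steps quoted above. The adapter index 0, the firmware file name, and the SAS address (which is the example address from the README) will differ on your card, so treat this as an outline of the sequence rather than something to run verbatim.

```
REM 2.bat - back up the current SBR
megarec -readsbr 0 mega.sbr
REM 3.bat - write an empty SBR and wipe the controller's flash, then power down
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
REM 5IT.bat (run per stage: DELL IT, LSI P7, LSI P16) - flash the IT firmware,
REM with "-b mptsas2.rom" added because I wanted the boot BIOS
sas2flsh -o -f 2118it.bin -b mptsas2.rom
REM 6.bat - reprogram the SAS address written down from ADAPTERS.TXT (example address from the README)
sas2flsh -o -sasadd 500605b001f31fa0
```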
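
Sketch for item 21: the PowerShell equivalents of those two changes on Windows 10, along the lines of the linked Microsoft article. Run from an elevated PowerShell prompt; a reboot may be needed, and disabling SMB2/3 is normally only for troubleshooting, so re-enable it afterwards.

```powershell
# Check and enable the SMB 1.0/CIFS optional feature (same as ticking it in Windows Features)
Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

# Disable (and later re-enable) the SMBv2/v3 server protocol
Set-SmbServerConfiguration -EnableSMB2Protocol $false
Set-SmbServerConfiguration -EnableSMB2Protocol $true
```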
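
Sketch for item 24: where the <boot order='1'/> tag sits when a passed-through NVMe device should be the boot device. The PCI address below is a made-up example - use the address shown for your NVMe controller - and remember to remove any <boot order='1'/> from the vdisk <disk> entry it was on.

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- example address only - substitute your NVMe controller's address -->
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <boot order='1'/>
</hostdev>
```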