Everything posted by lazant

  1. Can someone explain how I go about removing my 2nd parity disk before I start the mirroring process for upgrading file systems? Do I just unassign the parity 2 drive? Thanks.
  2. I’m finally getting around to converting the file systems on my ancient Atlas build from ReiserFS to btrfs or XFS (haven’t decided yet). I currently have 12 disks in my array: 10x 3TB WD Greens and 2x 3TB Seagate Barracudas for parity. One of the data drives is having read errors, so I’m going to replace it. I picked up 5x 6TB WD Red Pros the other day when Amazon was having a sale. My plan, eventually, is to replace the parity drives with these, add a cache drive, and keep a couple precleared and ready to go in case of drive failures. Here’s the order I’m planning to do things in:
     1. Replace parity drive 1 with a new 6TB Red Pro
     2. Unassign parity drive 2
     3. Replace the data drive having read errors with a 6TB Red
     4. Use the mirroring method to upgrade the array to the new file system (see the rsync sketch after this list)
     5. Add the 2nd parity drive back to the array
     6. Add a cache drive
     Does this order make the most sense? I would also like to eventually convert to running unRAID bare metal. As it stands I have it running as one of 4 VMs under ESXi.
  3. Ok, that's what I figured. Just thought I would ask. Thanks.
  4. I have another question for you. During a rebuild, is it possible to convert the filesystem? All of my drives are still on ReiserFS and it would probably be a good idea to start converting. Or is it best to just do the mirroring method?
  5. So the drive had another read error, this time in a different slot on a different backplane. I guess that's good news, because it means the drive itself is the problem and the fix is simply to replace it. Thanks again for all your help.
  6. So everything went well with the rebuild, but as you anticipated, it is now a week later and it has happened again I think. I have the Norco 4224 case which has 4 drives per cable/backplane. I'm just wondering what my next step should be. Should I replace the backplane even though I'm not experiencing any issues with the other 3 drives on the backplane with disk 10? Replace the cable going to the backplane? Abandon that slot and put disk 10 in a different tray? Thanks for your help.
  7. Thanks. Rebooted and you're correct, it's now sdn. Waiting for an extended SMART report now and then I'll rebuild.
  8. I just recently upgraded from 5.04 to 6.7.0 and discovered one of my disks had 2 read errors. I then went to try to get a SMART report for the disk (sdh), but when I click the Spin Up button, nothing happens. It still says "Unavailable - disk must be spun up". Looking at the logs I see this:
     May 22 21:51:18 Asteria kernel: mdcmd (113): spinup 10
     May 22 21:51:18 Asteria kernel: md: do_drive_cmd: lock_bdev error: -2
     Any thoughts? Thanks in advance. (A command-line spin-up/SMART sketch is after this list.)
  9. Hey, just wondering if you finished this build and how it's working? I'm considering the X10DRi-T4+ along with 2 E5-2630 v4s and wanted to check if you had any issues. Thanks.
  10. I'm building my second unRAID server and need some advice on a motherboard. I'll be using the server primarily for storage, PMS (5 max 1080p streams), web server, ownCloud, and 2-4 Linux/Windows VMs. Here is the hardware I've selected so far, but I'm open to suggestions:
     Case: http://lime-technology.com/d-316m-server-case/ $479
     Motherboard: Need IPMI
     CPU: E5-2650V4 $1150
     RAM: 4x16GB 2133MHz DDR4 ECC $350
     SATA Controller (if MB doesn't have enough ports): SYBA SI-PEX40064 $30
     PSU:
     HDD: HGST HMS5C4040BLE640 $138 x10, or HGST Deskstar NAS 4TB 7200RPM $163 x10
     SSD:
     Unraid server pro license: $59
     Thanks in advance for any advice you can provide.
  11. I'm stuck trying to get my unRAID shares to mount using autofs in my Ubuntu VM. I've turned on NFS and read the autofs documentation but can't seem to get it to work. Any help would be greatly appreciated (a possible fix is sketched after this list). Here is what my auto.master file looks like:
     +dir:/etc/auto.master.d
     +auto.master
     /home/lazant/Asteria /etc/auto.nfs
     Here is what my auto.nfs file looks like:
     Movies 192.168.0.24:/mnt/user/Movies
     Edit: I am able to mount the share fine using:
     sudo mount -t nfs 192.168.0.24:/mnt/user/Movies ~/Asteria
  12. I can confirm I'm using a Xeon 1240 V2 (Ivy Bridge) without issue.
  13. I have the X9SCM-F with a Xeon 1240 V2 (Ivy Bridge) and it is running fine so far, though all I have been doing is preclearing. I was able to flash my M1015 on my friend's ASRock Z77 board after getting the PAL error by using the UEFI shell. Maybe it would have worked on the X9SCM too? I thought that once you got the PAL error it was not possible on that board, but it worked. I followed the instructions from these two pages:
     http://forums.laptopvideo2go.com/topic/29059-sas2008-lsi92409211-firmware-files/
     http://lime-technology.com/forum/index.php?topic=25891.msg225711#msg225711
     These two commands worked from DOS:
     megarec -writesbr 0 sbrempty.bin
     megarec -cleanflash 0
     Then I got the PAL error after rebooting and trying sas2flsh, so I followed newbie_dude's instructions at the 2nd link above and it worked (a sketch of the typical UEFI shell steps is after this list). Good luck. Let me know if you get stuck anywhere.
  14. Does anyone know the comparable settings that must be set for an M1015/RES2SV240? My board came with BIOS 2.0b. I found all the other settings in the BIOS but don't see this one anywhere. Does anyone know what the setting is in 2.0b? Thanks.
  15. Hey all, I've decided, as the space on my 10TB Synology DS1010+ is dwindling, that I would bite the bullet and build my own server. As this is the first computer I will have built, I thought who better to turn to for guidance than the friendly folk on the UnRAID Forums, so any help/insight you might be able to provide would be greatly appreciated.
     First let me describe the uses I would like the server to fulfill:
     - Run an UnRAID VM for my home media
     - Run an Ubuntu VM with SAB, SB, CP and HP along with Plex Media Server. To give an idea of computing requirements, I'd like to be able to transcode 3-4 1080p MKV streams to iPhones simultaneously.
     - Run a VM to host a couple of low-traffic websites that I have (open to suggestions on OS)
     - Run a VM for statistical computing (primarily with R)
     I'm thinking I will run these VMs atop ESXi 5.1. I would also like the virtual machines to be backed up in case something goes wrong and I need to fix things, but I also want to minimize the effort I need to put into maintaining the system.
     Now I'll tell you the components I already have:
     Case, UPS, etc.:
     - Norco RPC-4224 4U Rackmount Server Case with 24 Hot-Swappable SATA/SAS Drive Bays
     - Tripp Lite SRW12US 12U Wall Mount Rack Enclosure Server Cabinet
     - CyberPower PR2200SWRM2U 2200VA 1500W Pure Sine Wave UPS
     Storage:
     - 5x 2TB Caviar Black drives
     - 10x Western Digital Caviar Green 3TB SATA III 64MB Cache Bare/OEM Desktop Hard Drive - WD30EZRX
     - Kingston HyperX 3K SH103S3/120G 2.5" 120GB SATA III MLC Internal Solid State Drive (SSD) (Stand-Alone Drive)
     For CPUs, I was thinking I would get 2x Xeon E5-2620s, but I'm open to suggestions. I was looking at these MOBOs:
     http://www.newegg.com/Product/Product.aspx?Item=N82E16813182354
     http://www.newegg.com/Product/Product.aspx?Item=N82E16813121589
     http://www.newegg.com/Product/Product.aspx?Item=N82E16813182351
     http://www.newegg.com/Product/Product.aspx?Item=N82E16813182348
     but they don't seem to support VT-d. I've checked http://vm-help.com/esx40i/esx40_vmdirectpath_whitebox_HCL.php and http://www.vmware.com/resources/compatibility/search.php?deviceCategory=server but the former seems rather outdated and the latter yields no results when I check the appropriate requirements ("VM Direct Path IO"). I'd also like IPMI on the MOBO since it will be tucked away in the rack in a closet. (A quick way to verify VT-d on running hardware is sketched after this list.)
     I know I still need a MOBO, CPU(s), RAM, PSU and hard drive controllers/RAID card, but I would like to get input from the community on what to get, especially concerning the MOBO. Also, let me know if you think I need extra fans, cables and other misc. supplies. Thanks in advance!
     -Tony
     P.S. As this is my first build, if anyone is in the Chicago area and would like to help me, I'd gladly buy a couple cases of beer or pay for your time/guidance. Just PM me.
  16. Norco 4224 Thread: Just wanted to update the thread with some pics of the latest rev of the 4224 (pictures attached in the original post).
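
For post 2 above, a minimal sketch of the disk-to-disk "mirroring" copy used when converting file systems, assuming one empty XFS-formatted disk (here /mnt/disk12) is the copy target and /mnt/disk1 is the ReiserFS source; the disk numbers and rsync flags are illustrative, not taken from the thread:

     # Copy everything from the ReiserFS disk to the freshly formatted XFS disk,
     # preserving permissions, timestamps and extended attributes.
     rsync -avPX --stats /mnt/disk1/ /mnt/disk12/

     # Optional dry-run with checksums to verify the copy before wiping the source.
     rsync -avcn /mnt/disk1/ /mnt/disk12/

Once the source disk is empty, it can be reformatted to XFS and become the target for the next disk, repeating until every ReiserFS disk has been converted.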
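For post 8 above, when the Spin Up button does nothing, the drive can usually be spun up and queried from the console instead; a rough sketch, assuming the disk is still /dev/sdh (device letters can change after a reboot, as noted in post 7):

     # Force a small read to spin the drive up (assumes the disk is /dev/sdh)
     dd if=/dev/sdh of=/dev/null bs=1M count=1

     # Pull the SMART attributes, then start an extended (long) self-test
     smartctl -a /dev/sdh
     smartctl -t long /dev/sdh

     # Check the self-test result once it has finished
     smartctl -l selftest /dev/sdh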
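For post 11 above, a minimal autofs sketch for an NFS share on Ubuntu; the -fstype and mount options are assumptions, not settings confirmed in the thread:

     # /etc/auto.master - indirect map: mounts appear under /home/lazant/Asteria
     /home/lazant/Asteria /etc/auto.nfs --timeout=60 --ghost

     # /etc/auto.nfs - key (mount point name), options, then the NFS export
     Movies -fstype=nfs,rw,soft,intr 192.168.0.24:/mnt/user/Movies

     # Reload autofs after editing the maps
     sudo service autofs restart

With an indirect map like this, autofs creates the Movies directory itself, so the share shows up at /home/lazant/Asteria/Movies rather than being mounted directly on ~/Asteria as in the manual mount command.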
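For post 13 above, a sketch of the UEFI shell steps that typically follow the two megarec commands when crossflashing an M1015 to LSI 9211-8i IT firmware; the file names and the SAS address below are placeholders from the generic procedure, not values from the linked instructions:

     # From the UEFI shell, after megarec -writesbr / -cleanflash and a reboot
     sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

     # Restore the card's SAS address (printed on a sticker on the card)
     sas2flash.efi -o -sasadd 500605bxxxxxxxxx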
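For post 15 above, once a candidate board and CPU are in hand, VT-d support (needed for VMDirectPath passthrough under ESXi) can be sanity-checked from a Linux live USB before committing to the build; a quick check, not specific to any of the boards linked in that post:

     # IOMMU/VT-d messages show up in the kernel log when it is enabled in the BIOS
     dmesg | grep -i -e DMAR -e IOMMU

     # CPU-side virtualization (VT-x); VT-d itself is a CPU/chipset/BIOS feature
     grep -c vmx /proc/cpuinfo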