About Josh


  1. Sorry everyone, I've been really busy at work. The cases are still for sale.
  2. Real life has hit and I've been parting out all my computer equipment. All that's left are my Norco cases; thought I'd throw them on here before I hit eBay.
     RPC-4224 - $300 shipped - SOLD
     - Not sure which version; has a USB port on the front and the dimmer HD lights
     - Upgraded to the 120mm fan plate
     - Replaced the rear fans with quieter ones (still have the originals that sound like jet engines if you want them)
     - Includes 120mm fans and Norco rails (might be missing some screws for the rails)
     RPC-4020 - $200 shipped - Pending Sale
     - Stock mid fans; think I changed the back fans to make i
  3. Which MB? Two cards on that motherboard, and one card on this MB so far; this is my new machine.
  4. Running b12a on two servers since it was released, with 3 cards and no block errors, all running. 19 drives total on the controllers, a mix of 3TB and 2TB drives. Also in the middle of preclearing 2 x 3TB drives on one of the controllers. I've also done a massive amount of writing to the disks on the controllers with no issues. I am, however, getting these errors on boot; it happens on every ATA connection from each of the controllers that has a drive attached. Both systems show the same thing:
     Oct 8 02:12:04 Tower kernel: ata9: sas eh calling libata port error handler (Errors
  5. It's telling me I should rebuild-tree; should I go ahead and do it?
     root@Tower:~# reiserfsck --check /dev/md4
     reiserfsck 3.6.21 (2009
     *************************************************************
     ** If you are using the latest reiserfsprogs and it fails  **
     ** please email bug reports to,                            **
     ** providing as much information as possible -- your       **
     ** hardware, kernel, patches, settings, all reiserfsck     **
     ** messages (including version), the reiserfsck logfile,   **
     ** check the syslog file for any related information.      **
     ** If yo
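     If you do decide to rebuild, the usual sequence is a read-only check first, the rebuild only if the check explicitly recommends it, then a re-check. A sketch, assuming the array is stopped (or disk 4 otherwise unmounted) and that you've copied off whatever you still can first:

     ```shell
     # Read-only pass: reports problems without touching the disk.
     reiserfsck --check /dev/md4

     # Only if --check recommends it: rebuild the internal tree.
     # This rewrites filesystem metadata and can take hours on a 2TB disk,
     # so do not interrupt it once started.
     reiserfsck --rebuild-tree /dev/md4

     # Confirm the tree is clean afterwards.
     reiserfsck --check /dev/md4
     ```

     The device must not be mounted while any of these run.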
  6. Will do. Also just finished a non-correcting parity check, and this was the result:
     Sep 16 11:43:54 Tower kernel: md: sync done. time=39315sec
     Sep 16 11:43:54 Tower kernel: md: recovery thread sync completion status: 0
     Sep 16 12:21:18 Tower kernel: REISERFS warning: reiserfs-5090 is_tree_node: node level 840 does not match to the expected one 1
     Sep 16 12:21:18 Tower kernel: REISERFS error (device md4): vs-5150 search_by_key: invalid format found in block 448294367. Fsck?
     Sep 16 12:21:18 Tower kernel: REISERFS error (device md4): vs-13070 reiserfs_read_locked_inode: i/o failure occurred tr
  7. So I still have no idea what to do. The df command shows that disk 4 has used -15TB of its 2TB space:
     root@Tower:/etc/rc.d/unraid.d# df
     Filesystem   1K-blocks    Used        Available   Use%  Mounted on
     /dev/sda1    3907560      1270364     2637196     33%   /boot
     /dev/md11    1953454928   1922942796  30512132    99%   /mnt/disk11
     /dev/md5     976732736    943558840   33173896    97%   /mnt/disk5
     /dev/md18    1953454928   1900403088  53051840    98%   /mnt/disk18
     /dev/sdd1    1465093832   408558320   1056535512  28%   /mnt/cache
     /dev/md8     976732736    949529776   27202960    98%
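     For what it's worth, the df numbers themselves show the corruption: a sane entry has no negative fields, and used plus available can't exceed the 1K-block total. A minimal sketch of that sanity check (the helper name is mine, not part of unRAID or coreutils):

     ```python
     def df_entry_sane(blocks_1k: int, used_1k: int, available_1k: int) -> bool:
         """Return True if a df entry is internally consistent:
         all fields non-negative, and used + available within the total."""
         if min(blocks_1k, used_1k, available_1k) < 0:
             return False
         return used_1k + available_1k <= blocks_1k

     # A healthy entry from the output above (/dev/md11):
     print(df_entry_sane(1953454928, 1922942796, 30512132))    # True

     # Disk 4 reporting roughly -15TB used (about -16 * 10**9 KiB) fails:
     print(df_entry_sane(1953454928, -16106127360, 30512132))  # False
     ```

     A negative used figure like disk 4's means the superblock's free/used block counts are corrupt, which is a filesystem-repair problem rather than anything df itself can fix.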
  8. Yeah, I don't see myself making a switch anytime soon; it's just promising to know I can migrate another way in the future and won't be tied to unRAID forever. Not that it isn't great, but just how large can it really get and still perform? I figure in a year to a year and a half I'll be building a third server, and I'll need a better way for the machines to communicate with each other. Actually, looking at all of my hardware (Win7 workstation, Ubuntu server, and two unRAID servers), if I did make a switch I could do it right away, and all I'd need to get are new hard drives to build my
  9. I'm getting to the point where I feel like I may be outgrowing my unRAID boxes: a 4224 maxed to capacity and a 4020 already half full of 3TB drives. What does my future of storage needs hold? It just seems like it's growing faster and faster. Just this week I removed all my services from both unRAID boxes, leaving just powerdown and apc running. I moved everything else over to a new Ubuntu server, and things have been running much more smoothly. Not that unRAID wasn't smooth in the first place, but everything is now much snappier. On Ubuntu I installed a ZFS stripe with 5 old HDDs as my "wo
  10. I had a 2TB WD EARS go bad on me last week. Got the replacement through RMA, precleared it, inserted it into the array, and rebuilt the disk; everything seemed normal. However, the disk is now showing 17.59TB free in the GUI. It's disk 4, and all the files from disk 4 are there when I browse the drive. How do I get this fixed? Syslog is attached. Thanks, Josh
      syslog-2011-09-14.txt
  11. I'm not going after parity speed, and my drives never spin down. It's taking 7-14 minutes to unrar a 10GB file, partly because of my CPU and partly because I'm writing to the disk at the same time the file is unraring. I did a test: it took 2 minutes to unrar a 10GB file on the cache drive when nothing was being written to the disk. However, when SABnzbd is downloading, it hits the 7-14 minute range. I tried both unraring through SABnzbd and unraring manually. To be honest, I'm very tempted to build a little 1U machine just to run my services and take unRAID back to just storage,
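      The manual test above can be reproduced by timing the same extraction in both situations; a sketch, with hypothetical archive and output paths that would need adjusting to your shares:

      ```shell
      # Time the same archive extracted while the array is idle...
      time unrar x /mnt/cache/downloads/test.rar /mnt/cache/tmp/

      # ...and again while SABnzbd is actively downloading, to compare
      # the "real" wall-clock figures between the two runs.
      time unrar x /mnt/cache/downloads/test.rar /mnt/cache/tmp2/
      ```

      The gap between the two "real" times is the contention cost from concurrent writes rather than raw CPU speed.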
  12. Are the onboard controllers on motherboards hardware or software? I'm not worried about the cache failing; I back up whatever the mover script doesn't move daily. I didn't mean that SSD drives don't work, I meant I can't afford to add 1.5TB of SSDs as a cache drive. I wish :-)
  13. Is it possible to set up some drives in RAID 0 and run them as my cache drive? I previously ran 4 x 320GB drives on my Windows machine as a "scratch" drive that did all the unraring, par-ing, etc., and it was really fast. Since I moved all of that kind of work to unRAID it's been painfully slow running on a single 7200rpm drive. SSD drives won't work; I need about 1.5TB as my cache. If it's possible, is it as simple as creating the stripe on the controller and booting up? I don't care about temps or spin-downs. Just want to know before I bite the bullet and pick up a few more drives th
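      For reference, if the stripe were built in software on a plain Linux box rather than on the controller, the mdadm equivalent of that four-drive scratch stripe would look roughly like this. Device names are hypothetical, creating the array destroys the members' contents, and whether unRAID will accept the resulting device as its cache is something to verify, not something I'm asserting:

      ```shell
      # Hypothetical member devices sdb..sde; --create WIPES them.
      mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # Put a filesystem on the stripe and mount it as the scratch area.
      mkreiserfs /dev/md0          # ReiserFS, matching unRAID of this era
      mount /dev/md0 /mnt/scratch
      ```

      A controller-level (fake RAID) stripe avoids all of this by presenting the four drives to the OS as a single disk, which is what makes the "create the stripe and boot up" approach plausible.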