Tybio

Members
  • Posts: 610
  • Joined
  • Last visited
Everything posted by Tybio

  1. I have one cable connected to the server, in the port shown as Ethernet 1 in the manual... NOT the IPMI dedicated port. Unraid comes up, and I can get IPMI working. eth0 shows that it has link and has the correct IP address on it... but it is unreachable from the network and it can't get out. The odd thing is that Linux has 192.168.1.2 configured on eth0 as it should, but IPMI shows up in the scan as 192.168.1.154. I'm not sure how to track this one down. Should I just disable both of the onboard eth ports and use the IPMI port... or... hmm. I'm actually out of ideas.
  2. Just FYI, I got a F-O today and it shows Bios version 2.0
  3. Not sure if this one is dead or not, but there is a 5.x version that has a lot of support chatter in the plugins forum for the 5.0 release.
  4. It's simple, but people tend to over-think it. Take a stab at reading this part of the FAQ and let us know if you still have questions! http://lime-technology.com/wiki/index.php/FAQ#How_does_parity_work.3F
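The XOR idea behind that FAQ entry can be shown in a few lines of shell. This is a toy sketch with single byte values standing in for whole disks, not anything resembling unRAID's actual implementation: parity is the XOR of every data drive, and any one lost drive is rebuilt by XORing parity with the survivors.

```shell
#!/bin/sh
# Toy parity demo: three "drives", each holding one byte value.
d1=83; d2=112; d3=200

# Parity is the XOR of all data drives.
p=$(( d1 ^ d2 ^ d3 ))

# If drive 2 dies, XOR parity with the surviving drives to rebuild it.
rebuilt=$(( p ^ d1 ^ d3 ))
echo "rebuilt drive 2 = $rebuilt"   # prints 112, the original value
```

The same trick works for any single drive, which is why one parity disk protects the whole array against one failure.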
  5. Ok, the best resource for this is the Compulsive Design forum, and greenleaf's prototype builds: http://www.greenleaf-technology.com/blogs/prototypes/index.php If you just post an open-ended question, the community can't answer, as each member only has direct knowledge of a few motherboards (other than the greenleaf guys, that is).
  6. If the price gap is only $50 in NZ I'd say the same... but it could be a lot more dear for special boards like that. I think someone was testing the Extreme6 board; you might want to search the forum for that. Obviously a different board, but it should give you some idea of how that "series" of boards does in general, and any major issues.
  7. Yea, I had to fudge it due to a media center solution that needed all of the media in one directory. Now that that is over (thank god) I can go back to splitting them up. Thanks for the advice.
  8. Apparently my upgrade to 5.0 blanked out my split level setting... and even before then things were getting out of hand. So I have a question for the group at large: in an array that's gotten out of hand, what is the best way to recover and get directories back onto the same drive? The thought of going directory by directory and manually moving files around just hurts my brain. I was thinking about creating new shares and just moving things over to the cache drive and letting the mover fix things up. Right now I have:
     Media
       - Movies
         - <Movie directories>
       - TV
         - <Show>
           - <Season>
     My thought is to create two shares, TV and Movies, so I can deal with them separately, then migrate all the media. Movies would be easy to just move on the same disk (99% of them are one file). However, recovering the TV shows is going to be a huge effort. I'm just not sure if it would be better to "move" them to my desktop one at a time and then move them back into the TV share, or if there is some other trick people have. So people know, I have a largeish TV collection... which is why doing it by hand hurts my brain:
     root@Storage:/mnt/user/Media# du -sh TV
     5.6T    TV
     root@Storage:/mnt/user/Media# du -sh Movies
     4.8T    Movies
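One way to script the per-show migration is sketched below. The paths are temp directories standing in for the real /mnt/diskN mounts (hypothetical layout, not taken from the thread), and plain `mv` is used; `rsync -a --remove-source-files` would be a resumable alternative for multi-terabyte moves. The point is to move one whole show per operation so all of its seasons land on the same disk.

```shell
#!/bin/sh
set -e
# Stand-ins for the real disks (hypothetical paths; on the server these
# would be something like /mnt/disk3/Media/TV and /mnt/disk5/TV).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/TV/ShowA/Season 1" "$dst/TV"
echo episode > "$src/TV/ShowA/Season 1/e01.mkv"

# Move one whole show per operation, so every season of a show ends up
# together on the destination disk.
for show in "$src/TV"/*; do
    mv "$show" "$dst/TV/"
done
ls "$dst/TV"
```

Looping show by show also means an interrupted run leaves each show entirely on one side or the other, never half-split.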
  9. Thanks man, that is fantastic information! Even if it is what I feared... that that cable sucked.
  10. I wouldn't get them for data drives just yet, but as I'm building a new server... getting one for parity, so disk size isn't an issue in the new server, seems like a good investment. They are a little high in price, but for parity... I'll take the hit.
  11. Mine works fine:
      APC UPS Status
      APC        001,036,0899
      DATE       2012-07-13 20:14:10 -0400
      HOSTNAME   Storage
      VERSION    3.14.10 (13 September 2011) slackware
      UPSNAME    Storage
      CABLE      USB Cable
      DRIVER     USB UPS Driver
      UPSMODE    Stand Alone
      STARTTIME  2012-07-11 09:30:11 -0400
      MODEL      Back-UPS BX1500G
      STATUS     ONLINE
      LINEV      123.0 Volts
      LOADPCT    14.0 Percent Load Capacity
      BCHARGE    100.0 Percent
      TIMELEFT   46.4 Minutes
      MBATTCHG   10 Percent
      MINTIMEL   10 Minutes
      MAXTIME    300 Seconds
      SENSE      Medium
      LOTRANS    088.0 Volts
      HITRANS    139.0 Volts
      ALARMDEL   30 seconds
      BATTV      27.2 Volts
      LASTXFER   Low line voltage
      NUMXFERS   0
      TONBATT    0 seconds
      CUMONBATT  0 seconds
      XOFFBATT   N/A
      SELFTEST   NO
      STATFLAG   0x07000008 Status Flag
      SERIALNO   ###########
      BATTDATE   2011-06-10
      NOMINV     120 Volts
      NOMBATTV   24.0 Volts
      NOMPOWER   865 Watts
      FIRMWARE   866.L5 .D USB FW:L5
      END APC    2012-07-13 20:14:13 -0400
  12. Hey man, just wondering how that 1-to-7 Molex splitter is working out for powering all of the drives; I see you only connected one of the two power points on the Norco disk caddies. I'm asking because I just ordered the same case and want to make sure that splitter will work to power up all 24 bays, or whether I should get two of them. Also, powering all 24 disks from one PSU feed... isn't that over the specs of the Molex plug on the PSU side?
  13. Split level only comes into play when the fill limit is hit. If you have all empty drives, then everything is going to write to the first drive until the fill level setting causes unRAID to switch to another drive. Check out the wiki for fill level info... that's what you're seeing right now. Edit1: Sorry, it's called Allocation method.
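That "fill the first drive until the mark is hit" behavior can be modeled in a few lines of shell. This is a toy model of the high-water allocation method as commonly described on the wiki, not unRAID's actual code: a write goes to the first disk whose free space is above the current high-water mark, and when no disk qualifies the mark is halved.

```shell
#!/bin/sh
# Toy high-water model: free space (GB) on three empty 2000GB disks.
free1=2000; free2=2000; free3=2000
mark=1000   # the mark starts at roughly half the largest disk

pick_disk() {
    # Return the first disk whose free space is above the mark.
    for i in 1 2 3; do
        eval "f=\$free$i"
        if [ "$f" -gt "$mark" ]; then echo "$i"; return; fi
    done
    echo 0   # none qualify; the real method would halve the mark here
}

echo "first write goes to disk$(pick_disk)"

# Once disk1 fills down to the mark, writes switch to disk2.
free1=1000
echo "next write goes to disk$(pick_disk)"
```

So with all-empty drives, every write lands on disk1 until it drains down to the mark, which matches what the poster is seeing.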
  14. Thanks, picked up one to be the parity drive in my new build
  15. Unmount the drive from your client systems and be sure that when you mount it again, you don't use the "root" account. If you don't have another user, create one and use that...then be sure that user has permissions to all your shares.
  16. If you are running addons, as a lot of people are, it is not rare at all. Another cause of this is CouchPotato or SickBeard running on a remote system checking the cache drive for newly downloaded SABnzbd files. If you aren't running any addons, and don't have automated systems checking the cache drive... then it should sleep.
  17. The X9SCM-IIF-O is the same as the F-O except for the BIOS and NICs? Odd that Newegg isn't stocking them yet.
  18. Anyone know if TRIM will work properly if you use ext4 and SNAP? Or is TRIM just not important for this use case?
  19. Good thought, put in the 850W version of the same PSU. Single-rail mod PS.
  20. Also, I see references to SAS2 cards from Supermicro... is the SASLP a good card for a build expected to last 5+ years?
  21. My current server is a 10-drive build from 2007, so it's limited to 2TB drives and getting very long in the tooth. I'm going to replace it with a new system, and with the update to 24 array drives in the latest 5.0 build... I was wondering if I could get some input on this build: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=24573047 It would be paired with an i3. My main question is the benefit of 3x SAS cards over 1x SAS + port expander. I've never worked with those before, and from what I can see it wouldn't save that much to get one of the LSI cards and an expander (or two, to hit 24 drives?). Looking for input!
  22. It's about perspective. The .cfg file is an optional place to define configuration options for a user share... what DEFINES a user share is that the directory exists. A user share can exist without a .cfg file, but a .cfg file without the directory is an error.
  23. What we are trying to say is that a directory created on the root of a disk IS a user share; that's what defines them... that they exist in that location. You /can/ create them from the Web interface, but all that does is a mkdir (and creates a .cfg file, though the .cfg file isn't needed, as .cfg files don't create shares, they just set options for them). That's how unRAID works; in the 5 tree you could (for the first time) control some of this by creating shares that exist only on the cache drive, but they are still user shares. There are a few options that I see, perhaps others can think of more:
      • Create a user share, limit it to the disk you are interested in, and put all the data under that.
      • Add a feature request to allow the functionality you are looking for.
      • Use hidden folders (good idea mr-hexen, I thought they would be user shares that would just not be browsable).
      • Use SNAP to mount disks outside of the array, then share them... use those for the non-user-share directories and set up a cron job to copy the data into the array (with rsync or some such system) so it gets backed up (UGLY!).
      The thing that makes this tricky is defining which directories should be "user shares" and which should be ignored. I suppose a complex smb config could make this work, but the simplicity of unRAID is a defining feature... so making things more complex needs to have a very good argument behind it.
  24. /mnt/user is just a joined view of the top-level directories on all member disks; it doesn't actually exist on its own. What you are asking for is a feature request: right now all top-level directories ARE user shares. You create one when you mkdir on /mnt/disk1... unRAID just sees that and lets you set options on it. For instance, I have a disk for my user shares; I exclude that disk from all other shares and set the user directories to only be valid on that disk... then I can use permissions to ensure that users can't see other users' data. You can always do that and put all the data you are trying to stick on /mnt/disk1 under that user top-level directory... same net effect.
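That "joined view" idea can be demonstrated with temp directories standing in for /mnt/disk1 and /mnt/disk2 (a hypothetical layout, not the poster's actual disks): the set of user share names is just the merged, deduplicated set of top-level directory names across the disks.

```shell
#!/bin/sh
set -e
# Temp dirs stand in for the real /mnt/disk1 and /mnt/disk2 mounts.
root=$(mktemp -d)
mkdir -p "$root/disk1/Movies" "$root/disk1/TV" \
         "$root/disk2/Movies" "$root/disk2/Users"

# The /mnt/user view is the union of top-level names across all disks;
# "Movies" appears once even though it exists on both disks.
shares=$(find "$root/disk1" "$root/disk2" -mindepth 1 -maxdepth 1 \
             -type d -exec basename {} \; | LC_ALL=C sort -u)
echo "$shares"
```

This is why a share spanning two disks still shows up as one directory: the union view merges identically named top-level directories into a single name.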
  25. In /theory/ you could bring the array up with the drive failed, copy the data onto a drive that's still working (using the rebuilt data to do the copy, the same way you "see" a failed drive's data in a user share, because it is reconstructed on the fly from parity), then, when the data is safe on the good drive, do a "new config" and start a fresh parity run. In reality? That leaves you unprotected for a long time while you are using the disks a LOT... so it is a high-risk operation, IMHO.
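The "copy off the emulated disk, then verify" step might look like the sketch below. Temp directories stand in for the real emulated /mnt/diskN and a healthy disk (hypothetical paths), and it uses cp/diff, though rsync with checksums would work just as well; the essential part is verifying the copy BEFORE the "new config", since that step discards the parity data that is emulating the failed drive.

```shell
#!/bin/sh
set -e
# Stand-ins for the emulated (failed) disk and a healthy destination.
failed=$(mktemp -d); good=$(mktemp -d)
echo "irreplaceable data" > "$failed/file.bin"

# Copy everything off while parity is still emulating the dead drive...
cp -a "$failed/." "$good/"

# ...and verify byte-for-byte before doing a "new config" and starting
# a fresh parity sync, which throws the old parity away.
diff -r "$failed" "$good" && echo "copy verified"
```

Only after that verification passes would it be (relatively) safe to reset the configuration and rebuild parity from scratch.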