bobkart

Everything posted by bobkart

  1. These enclosures work just fine with unRAID. I have four of them, two are being used with unRAID (48TB and 60TB raw capacities). Most any SAS card can connect to them. One downside is that they're not SAS2, so just 3Gb/s times the four channels, i.e. 12Gb/s or about 1.1GB/s total aggregate bandwidth. Sorry to resurrect such an old thread.
  2. You can increase the size of the parity drive just as easily as a data drive. The parity is rebuilt from the data drives, just as the data on a replaced/upgraded data drive is rebuilt from the rest of the drives. Either operation (Parity Sync or Data Rebuild) is typically an 8- to 16-hour job for an array with 4TB parity, depending on your system's disk throughput (a worked timing sketch follows this list of posts).
  3. Clarification: Over the last few months I've tested many combinations of SATA/SAS motherboards, controllers, port multipliers/expanders, enclosures and drives. For a SAS 4 x 3Gb/s connection (SFF-8087/8088) I typically see *at most* ~1.1GB/s aggregate throughput during unRAID's "all-drives" operations (Parity Check/Sync, Data Rebuild). This despite the math (as you point out) working out to 1.5GB/s (12Gb/s over 8 bits per byte). Most of the difference is due to the 10-bits-per-byte encoding I mentioned: https://en.wikipedia.org/wiki/8b/10b_encoding, which gets you from 1.5GB/s down to 1.2GB/s (see the worked numbers after this list of posts). The rest seems to come from various other small sources of overhead along the way, whose details I've yet to pin down.
  4. My understanding is that there is overhead in those SATA/SAS connections, for error detection/correction among other things, such that it takes 10 bits per byte. Hence the lower number. In my experience I tend to see more like 110MB/s when saturating SATA/SAS connections with "too many" drives per channel. So there's a bit more overhead creeping in from somewhere.
  5. I just saw this thread for the first time, after it was bumped to the first page. I've built a few SAS-based unRAID servers over the last few months, and one thing I see in this proposed build is that 20 drives will be funneled through just one 4x6Gb/s SAS connection, limiting the per-drive throughput for "all-drive" array operations to at most ~120MB/s (the arithmetic is sketched after this list of posts).
  6. Yep, that's how it works; unRAID clears the new drive before it's added to the array, to preserve the parity. Only when trying to add a drive that's larger than the parity drive is it more involved than that, i.e. the parity drive must be upgraded to a larger drive first.
  7. Hi Knightfolk, I can't say for sure whether the TS440 will work with unRAID (although I suspect it does), but I know for a fact that the TS140 does: http://www.amazon.com/Lenovo-ThinkServer-70A4000HUX-i3-4130-Computer/dp/B00F6EK9J2 (and it's less than half the price). I built a server with one last year, with drives in an external rackmount enclosure. I've since moved to a different (2U rackmount) machine for those drives (16 x 4TB). So I actually have a TS140 just lying around; I use it to test SAS cards from time to time. Possibly real-time transcoding would be too much for that Core i3 processor, but you mention that you really won't be doing any of that. Clearly the TS440 wins on number of drive bays; since you only mentioned three drives in your initial post, I thought I'd chime in with the TS140, since it has just that many drive bays (and room for at least one more if you remove the optical drive). One final caveat that comes to mind is that the Lenovo motherboards usually use a proprietary power connector (the TS140 does), so upgrading the PSU requires an adapter. I found the TS140 to be very efficient power-wise, and virtually silent; I was only able to better it by going to a mini-ITX motherboard.
  8. I'm using several LSI SAS cards successfully with unRAID: the 9200, 9207, and 9211, in both 4-port and 8-port versions, with internal or external ports (i.e. -4i, -8i, -4e, -8e). They can be found on eBay, often as low as $100 if you're patient. Right now I have a 9211-8i in my main server (16 x 4TB); here's one on eBay: http://www.ebay.com/itm/New-IT-Mode-9211-8i-SAS-SATA-8-port-PCI-E-6Gb-RAID-Controller-Card-US-Seller-/291372888631 I got a 9240-4i not too long ago, but unRAID didn't see the drives connected to it (despite the controller showing them during boot).
  9. Sorry to hear about your migration problems. I'm not sure where they might be coming from; there might be something flaky with the share mounting. In Linux-speak 'stat' is a verb, and 'cannot stat' just means the file's status could not be read (usually because the file can't be found or accessed): http://linux.die.net/man/2/stat Fortunately rsync can be used pretty easily for the situation where the target file doesn't yet exist. I use a command like this: rsync -aruv <source-dir> <target-server>:<target-dir> where <target-dir> is the directory on the target server in which you'd like there to be a copy of <source-dir>. Example: you have /mnt/user/share1/dir1 on the source server that you'd like copied (only what's missing) to the same share on the target server: rsync -aruv /mnt/user/share1/dir1 192.168.1.xxx:/mnt/user/share1 By adding an 'n' to the options (-naruv) you can do a dry run; it will just tell you what it would have copied. This is very useful for making sure you've got all the details right before committing to a potentially lengthy copy (although it can be interrupted with ^C). Both forms are spelled out after this list of posts.
  10. Keep in mind that the mover will want to remove the source data as it is copied. This may or may not be what you want. In my case it wasn't; I wanted the data in both locations upon completion of the copy. It worked for me to mount the drive(s) from the old server on the new one:
cd /mnt
mkdir guest
mount -t reiserfs /dev/sdz1 guest -o ro
Note the '-o ro'; without it, changes could be made to the source data, invalidating the parity.
  11. Hi storagehound. I ran it by the systems administrator where I work, and it turns out to be more complicated than just increasing the timeout. He seemed to have some idea how to fix the problem, but without getting hands-on with my server it would be hard to apply his ideas. The good news is that I'm not seeing the same problem with any of my new servers, since I'm now using XFS on them. It might be impractical to switch drives in an existing server from ReiserFS to XFS, but by all means use XFS for any new drives. Regarding rsync and alternatives, I've used scp successfully: scp <from-directory> <to-server>:<to-directory> I.e. scp Movies server2:/mnt/user I didn't confirm that those commands are *exactly* right, but that should get you close (a corrected form is sketched after this list of posts). Rsync also seems to work okay, to the extent that I've used it so far.
  12. I may have incorrectly believed that these enclosures support 4x6Gb/s SAS connections. After bringing the number of 4TB drives in my main enclosure up to ten, I'm now seeing a bottleneck during parity checks consistent with the SAS connection being only 4x3Gb/s. Searching online reveals conflicting specs in that regard. It might be that the second SAS connection on the enclosure ("SAS Out") can be used to double the bandwidth to the host; I know that port itself works, because I've connected the host through it instead of SAS In and it performed just as well. I have a second SAS card on order to try to answer that question. If not, these enclosures become less useful when fast parity checks are part of the requirement, since a single 4x3Gb/s connection will hit a bottleneck long before all 16 drives are installed.
  13. I don't believe hot-swapping is an option with unRAID; hopefully someone will correct me if I'm wrong. I've *always* powered my unRAID servers down before adding or removing drives. I concur with switchman: 4TB drives make more sense than 2TB and even 3TB drives these days. 8GB RAM is more than you need for base functionality, but maybe all the software you're considering adding could use it. Both my unRAID servers use external enclosures for the drives, so I can't be much help in that regard, but in my latest build the primary consideration was disk I/O bandwidth. If a second drive fails during a parity rebuild, you'll lose the data on both drives, so minimizing the size of that window of vulnerability was my goal. Basically you just need to make sure the motherboard and disk controllers have sufficient SATA bandwidth to read/write from/to all drives at the same time without bottlenecks. For example, a PCI Express Revision 2.0 x8 slot has (I believe) a bandwidth of 4GB/s, enough for each of 24 drives to transfer at ~167MB/s, fast enough for most "value" drives (the lane math is sketched after this list of posts). But a controller that can handle 24 drives might be too expensive compared to multiple lesser controllers, so balancing that trade-off becomes the next challenge. Setting aside the enclosure question, which I sidestep, that just leaves finding a suitable combination of motherboard and controllers that hits the right balance of cost and performance. I've made heavy use of NewEgg's 'Power Search' facility over the past month or so!
  14. Sorry to dredge this topic up from nearly three years ago; I just want to answer the question. Those enclosures work fine with unRAID. I got one a couple of weeks ago, and another one is showing up tomorrow. I just finished swapping out the two case fans for much quieter ones; drive temperatures are now nicely in the 30s (°C) and you can barely hear the enclosure (the power supply has a bit of a whine to it, which I'm going to look into next). I've got two more 4TB drives ready to go in, which will bring it up to 32TB (28TB usable). Parity syncs and checks complete in under 10 hours. All you need is a SAS controller that supports SAS port expanders. Because the enclosure has a SAS Out port, one SFF-8088 host port is enough for 24+ drives (you'll need two enclosures for that), if you don't mind slowing down past a dozen or so drives (depending on how fast the drives are). Otherwise, a two-port controller (or a second single-port controller) and a fast enough PCI Express interface will allow all drives to run as fast as they can during parity syncs and checks. This enclosure represents a tremendous value: my total cost comes to under $480 for a 16-bay system, less than $30 per bay. Using 4TB drives, that adds only ~$7.50 per TB to the total cost. My old unRAID server (64TB) is partially visible in the upper left corner.
  15. Is there a way to increase the timeout on the Windows side so it waits longer for the unRAID server to begin the file transfer before giving up on it?
  16. I've had it for over three years; bought the Plus key in September of 2011, the Pro key just about a year later. This one's about 3/4 full . . . working on a second server now. Thanks LimeTech, for building and supporting a great data storage solution.
  17. Yep, I could do that. I'm just trying to avoid a week-long copy just to offload one of the eight RAID5 enclosures. Regarding RFS versus XFS, there are a couple of things I've noticed about RFS that could be considered objectionable. I realize that any filesystem design makes compromises, and I don't fault anyone for what I'm seeing; mainly I'm interested to hear whether XFS handles the same situations better. That would help me decide to go with XFS for the new server. Problem 1 is slow deletion of largish files (~10GB). I can deal with that much better than the next problem. Problem 2 is very slow initial creation of a largish file when the containing drive is nearly full. Maybe this is due to the large "disk" size I'm using (8TB-12TB); I'd be interested to know if others are also seeing this problem. It basically forces me to use the cache drive rather than write straight to the array. The copies from the cache drive are still slow, but there isn't the timeout problem that I experience when copying from Windows. More detail on the above problem: 'du --si' shows '70k' for files being created slowly due to this problem, often for several minutes, before the file size starts to go up normally. I imagine the OS is searching for free blocks into which to write the file, but I don't know enough about how RFS works to be sure that's what's causing the slowdown.
  18. Ah, I did not know that, although I had seen talk around here of the 6.0 release. I'd still be able to mount the old drives as reiserfs, I'm assuming. Is there a resource that lays out the pros/cons of the two filesystems? I'm sure I could do some internet searching, but maybe you know of a more convenient, concise resource. Thanks bjp999!
  19. Thanks everyone for the suggestions. Keep in mind that I don't want to take the old server apart; at most I'd be downsizing it. So I need to *copy* the data as opposed to just moving it. That precludes installing the old drives in the new server, at least until some get freed up due to downsizing. I had considered mounting the old drive as cache and letting the mover move it, but again I want a copy, so I need to mount the drive(s) read-only; besides not wanting the original files erased after the copy, I don't want *any* writing done to the old drive(s), as that would break parity correctness when I put the drive back in the old server. There are a couple of things I didn't mention about the old server that rule out certain approaches to this migration. I have no spare PCI slots due to using them all for eSATA controllers; it's a 2U rackmount chassis and it only has 3 PCI slots. Once I downsize to only needing two eSATA controllers, I can and probably will try a GbE NIC. Still, with PCI bus speeds, I can only achieve 1/4 or so of the full Gb/s speed, which of course is still much better than 100Mb/s. This last one is a biggie: the "drive size" on the old server is much *larger* than what it will be on the new server. Yeah, sounds backwards. I'll be using 3TB, 4TB, and perhaps 6TB drives on the new server. On the old server they are either 8TB or 12TB. That's because I'm using RAID5 enclosures to create one logical "drive" out of five physical drives . . . the 64TB server only has 8 "drives": one 12TB parity drive and a mix of 8TB and 12TB data drives. Again I thank everyone for their suggestions, and I apologize for not providing more of the configuration details in my first post. Once I've tried the approach I have in mind, I'll have a better idea how well it will work going forward. I mainly wanted to ensure the safety of the data I'd be subjecting to this process, as the new server has yet to prove itself reliable; until then I want the data duplicated.
  20. Greetings, fellow unRAID enthusiasts. This is my first forum post, although I've been using unRAID for many years. My first unRAID server has grown too large (64TB) for the outdated hardware I built it on (PCI slots), and I'm in the process of building a second server. I'll probably keep them both, but somewhat downsize the old server and relegate it to longer-term storage of less-frequently accessed data. My question regards copying data between the servers. I don't want to use the network, because I only have 100Mb/s on the old server; at that rate each terabyte will take about a day to copy. (I searched for things like "migrate", but most results seemed to be about upgrading server versions or moving all drives from one host to another.) My idea is to mount the drive(s) containing the data to be copied onto the new server (read-only!) and 'cp -r' on the command line from the mounted drive(s) to the new array (a sketch of the commands follows this list of posts). I've already confirmed that I can do the required mounting on the new server, via eSATA, of a test drive that was once in an unRAID array. (The old server *was* version 5.0.beta12 until this evening, when I upgraded it to 5.0.5, the same version as the new server.) I just want to make sure I'm not missing something that would make the proposed approach unworkable. Thanks for your help, and a big thanks to LimeTech for building and maintaining such a quality system!
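
A rough timing sketch for the parity sync / data rebuild estimate in post 2. The 70-140 MB/s average throughput figures are assumptions for typical drives, not measurements from any particular system; the array is assumed to have a 4TB parity drive:
echo "4000000 / 140 / 3600" | bc -l   # sweeping 4TB (4,000,000 MB) at 140 MB/s average: ~7.9 hours
echo "4000000 / 70 / 3600" | bc -l    # the same sweep at a 70 MB/s average: ~15.9 hours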
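
The bandwidth arithmetic behind posts 3 and 4, assuming only that the SAS link uses the 8b/10b line coding described in the Wikipedia article linked in post 3 (10 line bits per data byte):
echo "4 * 3 / 10" | bc -l   # 4 channels x 3 Gb/s = 12 Gb/s raw -> 1.2 GB/s of payload
# the observed ~1.1 GB/s aggregate is that figure minus the remaining protocol overhead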
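
The per-drive ceiling mentioned in post 5, using the same 10-bits-per-byte rule, with 20 drives sharing one 4x6Gb/s connection:
echo "4 * 6 / 10 / 20 * 1000" | bc -l   # 24 Gb/s raw -> 2.4 GB/s payload -> ~120 MB/s per drive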
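
The rsync invocation from post 9, spelled out as the dry run followed by the real copy; the share, directory, and address are the same placeholders used in that post:
rsync -naruv /mnt/user/share1/dir1 192.168.1.xxx:/mnt/user/share1   # dry run: lists what would be copied
rsync -aruv /mnt/user/share1/dir1 192.168.1.xxx:/mnt/user/share1    # real copy (interruptible with ^C)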
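
Regarding the scp example in post 11: copying a whole directory needs the -r (recursive) flag, so a corrected form of that command would look something like this (Movies and server2 are the names used in that post):
scp -r Movies server2:/mnt/user   # recursively copy the Movies directory to the target server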
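
The lane math behind the PCIe figure in post 13, assuming a PCI Express 2.0 slot (5 GT/s per lane with 8b/10b coding, i.e. roughly 500 MB/s of payload per lane):
echo "8 * 5 / 10 / 24 * 1000" | bc -l   # x8 slot: 4 GB/s total -> ~167 MB/s for each of 24 drives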
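
A minimal sketch of the migration approach from post 20, using the read-only mount shown in post 10; /dev/sdz1 is the example device from that post, and the angle-bracket names are placeholders to fill in:
cd /mnt
mkdir guest
mount -t reiserfs -o ro /dev/sdz1 guest   # read-only, so parity on the old server stays valid
cp -r guest/<dir-to-copy> /mnt/user/<target-share>/
umount guest   # once the copy is complete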