jortan

Everything posted by jortan

  1. I would write to the share, which would utilise the cache, for a couple of reasons:
     1) It might prevent additional disk/s from spinning up (unless the disk/s you're writing to are also being seeded from).
     2) It prevents some fragmentation of downloaded files, as the full contiguous files are written to your disks each night.
     But to each their own!
  2. Someone correct me if I'm wrong, but I think those 4 PCI slots share a maximum bandwidth of 132MB/sec. This isn't a problem if you're only writing to one or two drives at a time (as you might be with WHS), but as you add more drives the system is going to get slower and slower. You can alleviate this somewhat with a cache drive, but parity checks/rebuilds are going to take forever: at 16 drives, you're looking at a maximum theoretical speed of 8.25MB/sec per drive.
  3. At the console of your unRAID machine, run the command:
       ethtool eth0
     and tell us the entries for "Speed" and "Duplex".
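     For reference, a healthy gigabit link should report something like the following (a trimmed example - exact output varies with the ethtool version and NIC):
       Speed: 1000Mb/s
       Duplex: Full
     If you see 100Mb/s or Half duplex instead, suspect the cable, the switch port, or failed auto-negotiation.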
  4. I'm not a Mac person, but here's some info to tide you over until one shows up here.
     1) I've never used Transmission, but apparently some people are running it on unRAID: http://lime-technology.com/forum/index.php?topic=8736
     2) Same goes for Plex: http://www.lime-technology.com/wiki/index.php?title=Plex_for_UnRAID_FAQ
     3) Yes. By default you'll be using SMB (Samba) shares. I believe there is AFP (Apple Filing Protocol) support now, but I'm not up to date on its status. You can use TimeMachine to an SMB share though (see the note at the end of this post): http://lime-technology.com/forum/index.php?topic=5180.msg47986#msg47986
     4) As far as your backup strategy goes, is your "copy of everything" kept at home? If so, what if you have a fire? You've lost everything. For really important data I recommend an offsite backup like CrashPlan, which can be set up to run inside unRAID. As for the rest of your data (from Transmission), is it really that important/irreplaceable?
     Dual parity is on the roadmap, but no word on when this will be implemented: http://download.lime-technology.com/develop/infusions/aw_todo/task.php?id=23
     Things are constantly changing so there's no book, but there is plenty of wiki information: http://www.lime-technology.com/wiki/index.php?title=Unofficial_Documentation
     Much of the additional functionality is being implemented by users in plugin format, to make things easier for people not familiar with the Linux command line: http://lime-technology.com/forum/index.php?topic=2595.0
     The unRAID community is friendly and helpful, so as long as you're willing to approach this with an open mind and a willingness to learn, you should be fine.
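     A note on the TimeMachine point in 3) above: if the linked thread matches what I've seen elsewhere, the first step on the Mac side was telling Time Machine to show unsupported network volumes (this is from memory - double-check against the thread):
       defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1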
  5. Make a new folder on an unaffected disk:
       mkdir /mnt/disk4/[sharename]/recoveredfiles
     Then, to copy all files and folders recursively from one path to another:
       cp -R [source] [destination]
     The R must be a capital. If you're inside the lost+found folder you can use * as the source.
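     For example, if the affected share were called Movies (the share name and disk numbers here are placeholders - substitute your own):
       mkdir /mnt/disk4/Movies/recoveredfiles
       cd /mnt/disk3/lost+found
       cp -R * /mnt/disk4/Movies/recoveredfiles
     Verify the copy completed without errors before deleting anything from lost+found.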
  6. 450/850 watts are maximum power ratings, not a reflection of how much power they consume (that depends on the devices attached to them). Two unRAID systems will definitely use more power than just one. This is why you see some people doing the opposite of what you suggest: virtualising unRAID and other services into a single box using VMware ESXi.
  7. The cache drive is designed to operate entirely transparently. I suppose you could run the mover script manually, but the intended operation is to schedule the mover once per night (the default setting - it runs at 3:40am) or once per week, and not have to think about it at all.
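     If you do want to see or change the schedule, the mover is just a cron job. On the versions I've used, root's crontab contains something like this (assuming the stock script location of /usr/local/sbin/mover):
       # run the mover every night at 3:40am
       40 3 * * * /usr/local/sbin/mover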
  8. Parity calculation does not include the cache drive, but as long as the drive is not written to significantly, most or all of the file should be recoverable.
  9. The syslog in the attached zip file is showing up as zero bytes.
  10. This might be a silly question, but did you stop the array first? Just make sure you don't write anything to the array (more specifically, the cache drive).
  11. It doesn't matter if the ports are used normally. My limited research says this:
      Controller: 2 x SFF8087 ports
      Expander: 6 x SFF8087 ports
      The normal configuration is to connect one SFF8087 cable between the controller and the expander. Losing one port on each leaves you with one port on the controller and five ports on the expander: six SFF8087 ports for a total of 24 SATA connections. The problem is that the 20 drives on the expander are all sharing the bandwidth of that one SFF8087 uplink to the controller. With four 3Gb/sec lanes in that uplink shared across 20 drives, that's roughly 60MB/sec each? Which sounds OK for current green drives, but maximum theoretical speeds never seem to match actual real-world speeds...
      It's possible to use both controller ports to supply more bandwidth to the expander, but that only leaves you with 16 SATA connections - you'd be better off (and have more money in the bank) with two BR10i's. Probably worth investing a little bit more to do it right, as fade23 has done.
      3TB+ support is critical to me though, as I'm hoping this next build will last me a good 7-10 years, and I'm guessing 4TB drives will be cost effective in 2013!
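      Rough working behind that per-drive figure (assuming 3Gb/sec lanes and the usual 8b/10b encoding overhead):
        1 SFF8087 uplink = 4 lanes x 3Gb/sec = 12Gb/sec raw
        12Gb/sec x 8/10 encoding = ~1200MB/sec usable
        1200MB/sec / 20 drives = ~60MB/sec per drive, best case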
  12. >>A question about drives: if the drives have been running in one orientation, e.g., horizontal, is there any risk if they are changed to the other orientation - i.e., in this example, vertical?
      This won't be any problem - just try to avoid moving 3.5" drives while they are running (but I suspect you already know that).
  13. Thanks for posting this - I'll be setting up a new ESXi/unRAID build later this year and this is exactly what I'm aiming for. I was planning to look at the new Bulldozer CPUs, but these entry-level Xeons look great - I might end up duplicating your build almost exactly!
      I already picked up a BR10i card on eBay; I wonder if this will work OK with a SAS expander, or if there would be performance issues (3Gb/sec vs 6Gb/sec).
      You might be interested to know that the latest unRAID beta supports 3TB drives. I did a bit of research and your controller supports 3TB, but I'm not sure about the expander. Assuming it does, or will with a firmware update, you might be able to bump up those maximum storage figures by 50% soon - probably 100% by this time next year!
      Do you find any performance issues running all your guest operating systems on a laptop drive?
      edit: Forgot to ask - are drive temps and standby working?
  14. As I understand it, this card ships with some IBM servers; in some cases a different disk controller is required, so the BR10i is surplus to requirements. This is why you'll see a good number of effectively new BR10i's on eBay at a keen price. "Returns from eBay vendors" as in the card was faulty? You could buy elsewhere, but you will probably pay more - I don't see the risk of DOA as being much higher on eBay for these cards...
      ps: Try to get the bracket included; apparently it can be difficult to source by itself, as evidenced by the price gouging for the bracket alone on eBay.
  15. >>2) I prefer VMDirectPath for pass the Controller Card though.
      >>4) Spindown and temp reading work perfectly on controllers passed via VMDirectPath
      This is what puzzles me - I've read that the BR10i card works with VMDirectPath, but if 4) is the case, why are there reports that spindown and temp readings don't work with the BR10i? Confused.
  16. Strange - I've read in many places that the limit is two (with the exception of PCI devices on the same bus counting as one device), including somewhere on this forum, I'm sure... I found somewhere else today describing a limit of 6 devices... Maybe it was increased in very recent versions of ESXi?
  17. Interesting idea, but I suspect a lot of people interested in ESXi would rather keep some slots available for passing through TV tuners, video cards etc., rather than using 6 slots just for SATA controllers. Also, I think there is a limit of two devices passed through to each VM?
  18. (From the Norco 4224 thread) How strange. I wonder if the result would be the same if you connected the second set of molexes by themselves (i.e. is there some problem with the first set). Then again, if it ain't broke...
  19. You might want to put labels on your drives with the last 4 digits of each drive's serial number - somewhere that's visible when the drive is installed. This way you can match up physical drives with the list of drives in the unRAID interface.
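      If you need to read a serial number without pulling the drive, hdparm can report it from the console (/dev/sdb here is just an example device):
        hdparm -I /dev/sdb | grep -i serial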
  20. This should answer any questions you might have (or hadn't thought of yet) about the cache drive feature: http://www.lime-technology.com/wiki/index.php?title=FAQ#Cache_Drive
  21. It doesn't replace the ports on your motherboard; it provides two additional ports, so you can have a higher total number of drives.
  22. Yes, and as the guide strongly points out, make sure you DON'T assign these disks in unRAID until all your data is safely on the array and parity is synced.
  23. If you feel comfortable with command line: http://lime-technology.com/wiki/index.php?title=Copy_files_from_a_NTFS_drive
  24. You could spin them down (or wait until they should have spun down automatically) and then try to access a file over the network. There should be a ~5+ second delay while the drive/s spin up?
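      You can also check the power state directly from the console rather than timing a network delay (/dev/sdb is just an example device):
        hdparm -C /dev/sdb
      A spun-down drive reports "standby"; a spinning one reports "active/idle".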