DocBlock

Everything posted by DocBlock

  1. It's also possible that your USB ports aren't all provided by the same chipset; for example, 8 may be via the onboard USB chip, and 4 more may be via an extra chip, or the like. Are any of them USB 3.0? I'm gonna guess that one of the two chipsets providing USB connectivity on your board isn't supported by the unRAID kernel, and so you only see the ones that are.
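A quick way to test that theory is to count the USB host controllers the kernel can see. This is a sketch, assuming `lspci` is available on the unRAID console; the sample output below is made up for illustration so the snippet is self-contained.

```shell
# Count USB host controllers. On the server itself you'd pipe real
# `lspci` output instead of this captured sample.
sample='00:1a.0 USB controller: Intel Corporation 82801JI USB UHCI Controller #4
00:1d.0 USB controller: Intel Corporation 82801JI USB UHCI Controller #1
03:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller'
echo "$sample" | grep -ci 'usb controller'   # more than one => multiple chipsets
```

If the count is higher than one, compare it against how many controllers show up in the unRAID syslog at boot.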
  2. I recently moved to a Netgear WNR3500L-100NAS which is their "open source" router. I flashed DD-WRT onto it and have been very happy. No external antenna ports though, if you are looking for that. There's plenty to read about the Netgear open source routers at http://www.myopenrouter.com.
  3. I did a full shutdown of everything in my house (vacation for 2 weeks) and the problem persists after returning home. Any ideas? I'm stuck, and it's really annoying not to be able to use a disk.
  4. Rack Mount Case?

     Some rackmount cases are spendy, and the Norco 4220 is popular because, for the feature set, it's actually pretty cheap. Depending on how many drives you want to run, rackmount cases can be found under $100. I bought a few of these to rack up some other servers in my house (not unRAID): http://www.newegg.com/Product/Product.aspx?Item=N82E16811192047 They have enough room for one 5-in-3, but there are similar case designs that allow for two 5-in-3's. I would tend to think that ventilation of a tower on its side would be OK if you overdid the airflow, but that would definitely vary a lot with the components used.
  5. mifronte is correct. You only need one molex connector per backplane.
  6. Here's the exports file, as requested. I don't see anything out of kilter, sadly.

     root@leng:~# cat /etc/exports
     # See exports(5) for a description.
     # This file contains a list of all directories exported to other computers.
     # It is used by rpc.nfsd and rpc.mountd.
     "/mnt/disk1" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk2" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk3" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk4" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk5" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk6" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
     "/mnt/disk7" -async,no_subtree_check,anongid=0,anonuid=0,all_squash *(rw,no_root_squash)
  7. I'm having a strange issue with NFS and I am surely overlooking something obvious, even though I've been using NFS for 20 years. So here goes: I'm trying to mount /mnt/disk2 onto one of my machines. In order to identify each disk, I've made the following files:

     root@leng:/mnt# ls disk*/I_am*
     disk1/I_am_disk1  disk2/I_am_disk2  disk3/I_am_disk3  disk4/I_am_disk4  disk5/I_am_disk5  disk6/I_am_disk6  disk7/I_am_disk7

     My NFS exports, as automatically built by unRAID:

     root@leng:/mnt# exportfs -v
     /mnt/disk1 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk2 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk3 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk4 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk5 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk6 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)
     /mnt/disk7 <world>(rw,async,wdelay,no_root_squash,all_squash,no_subtree_check,anonuid=0,anongid=0)

     On the machine where I am mounting the disks:

     [root@ws1 mnt]$ mount | grep disk
     leng:/mnt/disk1 on /mnt/disk1 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk2 on /mnt/disk2 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk3 on /mnt/disk3 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk4 on /mnt/disk4 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk5 on /mnt/disk5 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk6 on /mnt/disk6 type nfs (rw,addr=10.8.2.5)
     leng:/mnt/disk7 on /mnt/disk7 type nfs (rw,addr=10.8.2.5)

     But look at the ID files I placed on each disk:

     [root@ws1 mnt]$ ls disk*/I_am*
     disk1/I_am_disk1  disk2/I_am_disk1  disk3/I_am_disk3  disk4/I_am_disk4  disk5/I_am_disk5  disk6/I_am_disk6  disk7/I_am_disk7

     Note that disk2 is disk1. No matter what I do, this keeps happening, and I can't seem to access disk2 over NFS.

     Now mind you, I did mount disk2 via NFS when I first put all my data onto it. I have tried rebooting both the unRAID server and this Linux workstation, to no avail. What am I missing?
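One thing worth checking, as a sketch (the `same_fs` helper below is hypothetical, not an unRAID tool): if the client has bound both mountpoints to the same server filesystem, the two paths will report the same device number via stat(1), which would produce exactly this disk2-looks-like-disk1 behavior.

```shell
# Hypothetical helper: succeeds (exit 0) when two paths report the same
# device number, i.e. the kernel treats them as one filesystem.
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}
# On the workstation you would run:
#   same_fs /mnt/disk1 /mnt/disk2 && echo "same filesystem - suspect a shared fsid"
```

If they do match, the NFS client has merged the two mounts into one superblock; nfs(5) documents a `nosharecache` mount option that forces separate caching per mountpoint, which would be one thing to try on the workstation side.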
  8. The AOC-SASLP-MV8 expansion cards are popular at the moment, and their prices are right, too. That case supports a total of 20 3.5" disks and 2 2.5" disks, along with a slim CD drive slot. Assuming all SATA, that's 23 ports. The SASLP cards provide 8 each, so with two of them you'd need 7 onboard SATA ports to fully "deck out" that chassis. Mind you, CD drives aren't supported in unRAID, so that drops it to 22. And unRAID only supports 20 disks in the array, so you may not need the two 2.5" drive bays either, bringing the necessary onboard ports down to 4. You will need two PCIe x4 slots for the SASLP controllers, so that's really what you end up with:

     2 PCIe x4
     4 onboard SATA

     Anything else is whatever you like. I picked up a DFI LanParty board with 3 PCIe x16 slots as an open box for $90 and added a cheap Celeron E3300 to it. Things to consider are onboard video and onboard gigabit ethernet that's supported under unRAID. I just went through this same thing. There are plenty of boards out there, and not all of them cost a fortune. I too would like to see some more modern choices in the hardware list. While I have the above board, it hasn't been transplanted yet. I'm currently using a very old Asus A8N-SLI with no problems at all, and very good (>50MB/s) performance. So don't be afraid to consider something a bit older.
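The port math above, spelled out (numbers taken from the post itself, not re-checked against the chassis spec):

```shell
array_limit=20        # unRAID's maximum array size; the 2.5" bays go unused
per_card=8            # ports per AOC-SASLP-MV8
cards=2
onboard=$(( array_limit - cards * per_card ))
echo "$onboard"       # onboard SATA ports still needed
```

With two cards covering 16 drives, that leaves 4 of the 20 array slots for the motherboard's own SATA ports.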
  9. My OEM cards came with both full and half height backplanes, though not attached to the cards. I had to choose the proper one and attach it myself.
  10. This smells of hardware, either the drive, cable, or controller. Can you swap any of it?
  11. I've been running this board/card combo and I'm not having any trouble at all. I've only got 9 drives connected, but separated out onto the various ports on the various controllers. I last rebooted on May 25, in order to add some more disks to the array, but that's the only reason I've had to down the system. So yes, I'd say it's running without errors, aside from the standard HDIO_GET_IDENTITY errors that happen due to the immature driver for this card. That happens on any motherboard though, and doesn't seem to cause anyone problems.
  12. I got mine here: http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=300360847366&ssPageName=STRK:MEWNX:IT It was a few dollars more than some vendors, but two of us unRAID forum folks have ordered from him and the contents were exactly as described.
  13. Looks like they charge a fortune for shipping if the order is under $50. Adding an extra drive tray ($5) to the order caused shipping to be $3.
  14. That same cable direct from Norco is a bit cheaper (surprisingly): http://www.ipcdirect.net/servlet/Detail?no=216 They also have the reverse breakout for going to the motherboard at $12: http://www.ipcdirect.net/servlet/Detail?no=215
  15. I noticed that rtorrent doesn't apply settings immediately for the port(s) to listen on. By default, it's a range of 200ish ports, and it picks one at random. Are you using that default set, and are you sure you have the entire range forwarded? I changed the configuration file on disk to a single port and forwarded it in, and it's working exactly as I'd expect, in terms of ratios.
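For reference, forcing a single port in `.rtorrent.rc` looks something like this (classic rtorrent option names; 51515 is just an example port, use whatever you actually forwarded):

```
# Listen on one fixed port rather than picking at random from a range
port_range = 51515-51515
port_random = no
```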
  16. Yep. SFF-8087 to SFF-8087 is what you need. I bought mine direct from Norco, their price was reasonable, and I picked up a spare drive cage while I was at it, in case I break one at some point.
  17. Thanks. Got the new disks added, no sweat. For what it's worth, I precleared 5 disks at the same time, and it worked like a champ. Two were on the onboard NForce 4 SATA ports, and 3 were via the MV8. The three disks on the MV8 were running ~80-90MB/s, while the two on the NF4 were running 90-100MB/s. I was hoping to post hard numbers when the runs completed, but alas, I lost those terminals, and didn't want to do another 20 hours of preclear just to get that data.
  18. I was preclearing five 1.5TB disks and was 85% done with step 10 of the preclear script when my desktop machine (from which I was ssh'd in) locked up, causing the processes to abort. So, I guess what I'm wondering is whether I can resume stage 10 (either partway through or from its beginning) without repeating the other steps, and if not, whether the disks are "officially precleared" aside from verifying the full zero set. These are disks that had been in a RAID-10 for about 6 months, so I have a fair amount of faith in the drive health. Up until the clears were prematurely aborted, there had been no errors at all.
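If you just want to confirm the zero set yourself, something like this would do it. This is a sketch demonstrated on a scratch file; on the server you'd substitute the disk device (e.g. /dev/sdX), and GNU cmp's `-n` option is assumed. It says nothing about the preclear signature in the MBR, only whether the data area is all zeros.

```shell
# Create a scratch "disk" of zeros to demonstrate the check.
dd if=/dev/zero of=/tmp/fakedisk bs=1M count=4 2>/dev/null

# Compare exactly the file's length against /dev/zero; -n keeps cmp from
# running past EOF, and -s suppresses output so only the exit code matters.
size=$(stat -c %s /tmp/fakedisk)
if cmp -s -n "$size" /tmp/fakedisk /dev/zero; then
    echo "all zeros"
else
    echo "found nonzero data"
fi
```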
  19. Sadly, that looks to only be the Windows drivers, and doesn't include updated firmware.
  20. This kind of stuff is why I shouldn't read the deals forum.
  21. I should add that the uplink port you're looking for is pretty much outdated. Older switches had this. It was the "crossed" port, which prevented you needing a crossover cable. Uplink ports were never necessary, provided you had the proper cabling.
  22. Think of it like a power strip. You'll plug port #1 "upstream" and all the other ports will magically figure out how to tap into that ability.

      Ethernet is designed as a sort of "client/server" architecture. Routers and switches are servers, and computers are clients. When you want to connect a PC to a switch, you need a regular cable. When you want to connect a PC to a PC, or a switch to a switch, you need a crossover cable. However, starting 5-7 years ago, companies started making "autodetecting" ethernet ports. Today, you'd have a hard time finding a switch that isn't autodetecting, so crossover cables are no longer necessary; the switch will automatically cross the port instead, if necessary. This feature, if you're concerned, is usually called "Auto MDI-X" or "MDI/MDI-X", but I wouldn't sweat it. It's become so common that it might not even be on many switches' feature lists anymore.

      The one thing I would look for in a switch is the ability to use jumbo frames. This isn't something you'd take advantage of today, but jumbo frames allow for more efficient gigabit throughput. Someday, when your network is 100% pure gigabit, you could consider turning it on. Or not. But either way, jumbo frame support shouldn't add to the cost of the switch. It's just something to look for while you're shopping.
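To put a rough number on the jumbo-frames efficiency point, here's a back-of-envelope calculation, assuming ~40 bytes of IP+TCP headers per segment and 18 bytes of Ethernet framing (preamble and inter-frame gap ignored):

```shell
# Payload efficiency of one maximum-size TCP segment at a given MTU.
efficiency() {
    mtu=$1
    payload=$(( mtu - 40 ))                       # strip IP + TCP headers
    awk -v p="$payload" -v m="$mtu" \
        'BEGIN { printf "%.1f%%\n", 100 * p / (m + 18) }'   # add Ethernet framing
}
efficiency 1500   # standard frames
efficiency 9000   # jumbo frames
```

The gap is only a few percent of raw throughput, which is why jumbo frames are a nice-to-have rather than a must, but on an all-gigabit network every packet also costs per-packet processing, and jumbo frames cut the packet count by roughly a factor of six.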