ysss

Members
  • Content Count
    264
  • Joined

  • Last visited

Community Reputation

1 Neutral

About ysss

  • Rank
    Advanced Member

  • Gender
    Undisclosed

  1. Hi guys, will the 'new config' clear any other settings besides the disk assignments and voiding parity? I'm worried about my users list, share settings, docker settings and also my cache disk settings/assignments. Thanks
  2. Is there a significant change for v6? Also, does clicking 'New Config' clear the users list, shares list and other settings too? Edit: Please disregard, I found the answer here: https://lime-technology.com/forum/index.php?topic=47504.msg454857#msg454857
  3. Nope. I ended up installing NZBGet on a spare Windows server I have, and I haven't tried it back on the unRAID machine. I'm put off using a VM on unRAID, since it doesn't even support VLANs...
  4. I wonder if this is in response to WD's announcement of 8TB helium drives... A pair of 8TB WD Red helium drives sells for $549 inside the 16TB My Book Duo: http://store.wdc.com/store/wdus/en_US/pd/ThemeID.21986300/productID.335134800/parid.13092300/catid.13092800/categoryID.67972900/WDBLWE0160JCH-NESN/My_Book_Duo/External_Storage
  5. Opentoe: have you looked into the Blue Iris software? For $50, it lets you mix and match up to 64 cameras and has loads and loads of features. I use mine with Mobotix, Hikvision and Foscam cameras (soon to be adding and testing Areconts). I also really like their mobile client (Android and iOS), which lets you play back recordings too.
  6. For those looking for 6TB WD Reds, check out the WD "My Book Duo desktop 12TB", which comes with a pair of 6TB WD Reds for $445 (Newegg and Amazon). It has the same 3-year warranty, it's priced lower than a pair of bare WD Reds, and you get the USB 3.0 RAID-capable enclosure for free. http://www.amazon.com/12TB-Desktop-External-Drive-WDBLWE0120JCH-NESN/dp/B00LEF28CI http://www.newegg.com/Product/Product.aspx?Item=9SIA4P02F77130&cm_re=Wd_duo_12tb-_-22-236-731-_-Product
  7. Can you set different IPs (multihomed) for multiple NICs? Ideally I've been hoping for VLAN support, but I'll settle for this in the meantime...
  8. Hi, thanks for the idea. No, I don't run unRAID virtualized and I don't have any KVM VMs running, just plain unRAID + some plugins + some dockers. Sorry, I haven't made a sig... my hardware is something like this: Xeon E3-1245 v3, Supermicro X10SL7-F, 24GB RAM, 21 drives (4TB and 5TB), v6.1.5, a pair of btrfs cache drives. Edit: lots of clues down here...
  9. Guys, I'm getting really crappy network performance from my docker downloaders (SABnzbd, NZBGet). I had SABnzbd working well on unRAID v6.0.x, saturating my 50Mbps cable connection... then I upgraded to v6.1.3 and the download speed was cut by more than 95%, down to 2-3Mbps. I tried both SABnzbd and NZBGet and they both exhibit the same performance. I'm now on the latest unRAID (v6.1.5) and I still get the same problem. I can verify that it's not an internet connection problem, since I can still attain full speed by running NZBGet on a Win Server 2011 box on the same network. Looking at the NZBGet log, I get a lot of these: iptables output: ... ifconfig: ... I have tried both bridge and host network settings for this docker; right now I'm leaving it as 'host' networking. What could be causing my issues? Is there a connection timeout setting somewhere that I need to change? What method should I use to diagnose the issue? Thanks in advance, -ysss Edit: I've just noticed dropped packets on br0... Edit 2: noticed the MTU sizes... I've disabled jumbo frame support on my switch and set the MTU on all interfaces back to 1500. Still no dice... (see the dropped-packet/MTU check sketched after this list)
  10. "On my system it's harmless (annoying, since it shows on any PuTTY session, but harmless). But on a quick and dirty Google search, it looks like it could be docker related: NFS within an LXC container (docker, IIRC, leverages LXC containers). I am curious whether, when the message appears for you, that's when one of your containers crashes. (Mine never have at all)" I don't use NFS on any of my dockers... I'm not sure what happened with the docker that crashed; it started because I was getting 1/10th of my regular speed on SABnzbd (hurricane's?) and I went to reactivate my long-dormant NZBGet container. It needed an update, so I ran that... well, that thing ran for an eternity and crashed the web interface, and when I logged into the server via SSH, I saw the dreaded errors in the log and the system was unresponsive, so I rebooted after that. I know it wasn't 100% conclusive, but that's what I saw...
  11. I shouldn't have upgraded to 6.1.3... I'm getting this error, a docker container can crash the whole system, and SABnzbd runs super slow on it.
  12. Check out the Supermicro counterpart as well. Supermicro is known to be bulletproof for servers.
  13. Did you perform any of the following operations (and at which steps): 1. Disk/parity rebuild? 2. Parity check? 3. Format, other than in steps 3 and 5?
  14. I have the 4220; I think the backplanes are the same, just one fewer than on the 4224. You'll need straight SFF-8087 to SFF-8087 (SAS multilane) cables to connect the expander ports to the backplanes. If I'm not mistaken, the 4224 has 6 backplanes (4 SAS/SATA ports per backplane). And 2 reverse breakout (SAS to SFF-8087) cables to connect the 8 SAS/SATA ports on the motherboard to the SAS expander. When buying the multilane cables, take note of the length and the orientation of the connectors. Some of them come with angled (L-shaped) connectors; I don't think those can be plugged all the way into the backplanes, especially the bottom one. Yes, you won't need the AOC-SASLP-MV8 anymore, since the 8 onboard ports on the X10SL7-F can be expanded to 24 ports with a SAS expander (HP or Intel are the popular ones)... but a speed calculation may be in order if you want to avoid bottlenecks. As I've mentioned earlier, with 22 drives I maxed out around 105MB/sec running a parity check... with my drives (4TB+) I think I should be getting a tad more and am being bottlenecked by the SAS bandwidth (see the rough bandwidth calculation after this list). But I may be mistaken on that; I'm fine with that speed, so I haven't dug into optimizing it further.
  15. No problem. The cable is a tricky one, because usually even the problematic ones work most of the time; but once you put a lot of load on them (parity check, etc.), they'll start flaking out.
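A minimal way to spot the two symptoms mentioned in post 9 (dropped packets on br0 and mismatched MTU sizes) is to read the kernel's per-interface counters from sysfs. The sketch below is an assumption-laden diagnostic, not anything from the original posts: it assumes a Linux host and the interface names br0, eth0 and docker0; adjust the list to whatever your box actually has.

    #!/usr/bin/env python3
    # Rough diagnostic sketch: report MTU and dropped-packet counters per interface.
    # Interface names below are assumptions; edit INTERFACES to match your system.
    from pathlib import Path

    INTERFACES = ["br0", "eth0", "docker0"]  # assumed names

    def read_value(iface, name):
        # Read one sysfs attribute, e.g. /sys/class/net/br0/statistics/rx_dropped
        path = Path("/sys/class/net") / iface / name
        try:
            return path.read_text().strip()
        except OSError:
            return "n/a"

    for iface in INTERFACES:
        mtu = read_value(iface, "mtu")
        rx_drop = read_value(iface, "statistics/rx_dropped")
        tx_drop = read_value(iface, "statistics/tx_dropped")
        print(f"{iface}: mtu={mtu} rx_dropped={rx_drop} tx_dropped={tx_drop}")

If the bridge and the physical NIC report different MTUs, or a drop counter keeps climbing while a download is running, that points at the jumbo-frame/MTU mismatch rather than at the docker containers themselves.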
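The ~105MB/sec ceiling mentioned in post 14 is roughly what you'd expect if all the drives share a single 4-lane SAS link to the expander. A back-of-envelope check, assuming one x4 SFF-8087 link at 6Gb/s per lane, 8b/10b encoding overhead and 22 drives read in parallel (all assumptions, not measured values):

    # Rough per-drive bandwidth estimate for a single x4 SAS link feeding an expander.
    lanes = 4                 # one SFF-8087 wide port (assumption)
    gbps_per_lane = 6.0       # SAS-2 lane speed (assumption)
    encoding = 0.8            # 8b/10b line coding efficiency
    drives = 22               # drives read concurrently during a parity check

    usable_mbps = lanes * gbps_per_lane * encoding * 1000 / 8   # ~2400 MB/s aggregate
    per_drive = usable_mbps / drives                            # ~109 MB/s per drive

    print(f"Aggregate usable bandwidth: {usable_mbps:.0f} MB/s")
    print(f"Per-drive ceiling across {drives} drives: {per_drive:.0f} MB/s")

That lands right around the observed 105MB/sec, which is consistent with the link, not the individual drives, being the bottleneck; dual-linking the expander or spreading drives across more host ports would raise that ceiling.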