Everything posted by ysss

  1. Perhaps it could give an X-day reprieve by allowing it to downgrade to a trial/temporary license before it has to reauthorize? I get you... I was going to say that nowadays just about everyone has access to cellular internet and that most people would already find alternative ways to get online if they experienced an outage lasting more than a few hours... But then again, personal/smartphone connectivity is very different from a storage server's, not to mention that some of these servers found their use precisely because of unavailable/unreliable internet connections in the first place. So yeah, point taken. Anyway, this is just a feature request... and I think a (reasonable) online licensing model is much more acceptable today than it was a few years ago, and things will continue to move toward that end of the spectrum (everything is hyper-connected).
  2. They could leave the current (offline) licensing alone with v6.9 and just switch to online from v6.10 onward... What are the arguments for offline licensing these days? I mean, you still need to go online to get updates, addons, do online backups and whatnot. I never thought of hosting pfSense/a firewall on a VM, but in my case my unRAID server seems to be the most 'available', since it has dual PSUs, ECC RAM, a higher-quality Supermicro motherboard, etc.... so in that sense, I'm leaning toward putting mission-critical stuff on the best hardware that I own...
  3. I see, now I understand more. I don't mind a different kind of licensing scheme to enable this feature, even one at higher price and with online activation... if you can think of a good way to implement it
  4. Found out what was causing the problem... The ARP table on my managed switch had that particular IP set as static for another MAC address (the previous server)... after I cleared that MAC address entry, everything is working as expected.
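     (Not from the original post, just an illustrative sketch.) A quick way to see which MAC address an IP currently resolves to from another Linux box on the LAN is to ping it once and then read the kernel's ARP cache; if the client's view looks right but traffic routed through the switch still misbehaves, the switch's own ARP table is the next place to look, which is what turned out to be the culprit here. The IP and the expected MAC below are made-up placeholders.

        import subprocess

        TARGET_IP = "192.168.1.50"          # placeholder: the server's IP
        EXPECTED_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder: the new server's MAC

        # Ping once so the kernel populates its ARP cache for the target.
        subprocess.run(["ping", "-c", "1", "-W", "1", TARGET_IP],
                       stdout=subprocess.DEVNULL)

        # /proc/net/arp columns: IP address, HW type, Flags, HW address, Mask, Device
        with open("/proc/net/arp") as f:
            for line in f.readlines()[1:]:
                fields = line.split()
                if fields and fields[0] == TARGET_IP:
                    seen_mac = fields[3].lower()
                    status = "OK" if seen_mac == EXPECTED_MAC else "MISMATCH"
                    print(f"{TARGET_IP} -> {seen_mac} ({status})")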
  5. I'm on v6.9.2... and when I set a static IP from the web interface, the broadcast address is set to 0.0.0.0... I have to go to the console and issue a manual ifconfig eth0 broadcast x.x.x.255 to fix things, but where can I define the broadcast address from the web interface? To add some info: I set my static IP as 10.10.0.100, /24 subnet, gateway 10.10.0.1. Logically, it should calculate the broadcast address to be 10.10.0.255, but for some reason it sets it to 0.0.0.0. I didn't realize it at first, and the machine just lost network connectivity (Plex not working, app updates just called up the last cached version, etc.)... until I went to the console and checked (and corrected) it with ifconfig. Any thoughts and suggestions welcome. Thanks
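     (Illustration only, not part of the original post.) The expected broadcast address for that configuration can be sanity-checked with Python's standard ipaddress module; for 10.10.0.100 on a /24 it comes out to 10.10.0.255, so anything else (like 0.0.0.0) points at the interface configuration rather than the math.

        import ipaddress

        # IP and prefix from the post above: 10.10.0.100 on a /24.
        iface = ipaddress.ip_interface("10.10.0.100/24")
        print(iface.network.network_address)    # 10.10.0.0
        print(iface.network.broadcast_address)  # 10.10.0.255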
  6. With multiple arrays, I hope they let us turn arrays on/off individually... I'd make a simple RAID-1 for the mission-critical VMs that run 24/7 and that I don't mess around with.
  7. Nice. I'm also ordering a few things to quiet my main 846 (those green-caged Supermicro fans and the quiet PS or QS series PSU).
  8. +1 to VMs that can stay up during array maintenance. Some disk maintenance can take a while before the array becomes available again; that's the reason I'm still keeping my home automation VM on a separate VMware machine.
  9. Which motherboard/CPU did you pair with that case, if you don't mind me asking?
  10. Hi guys, will the 'New Config' operation clear any other settings besides the disk assignments (and voiding parity)? I worry about my users list, share settings, Docker settings and also cache disk settings/assignments. Thanks
  11. Is there a significant change for v6? Also, does clicking 'New Config' clear the users list, shares list and other settings too? Edit: Please disregard, found the answer here: https://lime-technology.com/forum/index.php?topic=47504.msg454857#msg454857
  12. Nope. Ended up installing NZBGet on a spare Windows server I have, and I haven't tried it back on the unRAID machine. I'm put off using VMs on unRAID, since it doesn't even support VLANs...
  13. I wonder if this is in response to WD's announcement of 8TB helium drives... A pair of 8TB "WD Red he" drives sells for $549 in the 16TB My Book Duo: http://store.wdc.com/store/wdus/en_US/pd/ThemeID.21986300/productID.335134800/parid.13092300/catid.13092800/categoryID.67972900/WDBLWE0160JCH-NESN/My_Book_Duo/External_Storage
  14. Opentoe: have you looked into Blue Iris software? For $50, it lets you mix and match up to 64 cameras and has loads and loads of features. I use mine with Mobotix, Hikvision and Foscam cameras (soon to be adding and testing Areconts). I also really like their mobile client (Android and iOS), which lets you play back recordings too.
  15. For those looking for 6TB WD Reds, check out the WD "My Book Duo Desktop 12TB", which comes with a pair of 6TB WD Reds for $445 (Newegg and Amazon). It has the same 3-year warranty, is priced lower than a pair of WD Reds, and you get the USB 3.0 RAID-capable enclosure for free. http://www.amazon.com/12TB-Desktop-External-Drive-WDBLWE0120JCH-NESN/dp/B00LEF28CI http://www.newegg.com/Product/Product.aspx?Item=9SIA4P02F77130&cm_re=Wd_duo_12tb-_-22-236-731-_-Product
  16. Can you set different IPs (multihomed) for multiple NICs? Ideally I've been hoping for VLAN support, but I'll settle for this in the meantime...
  17. Hi, thanks for the idea. No, I don't run unRAID virtualized and I don't have any KVM VMs running, just plain unRAID + some plugins + some dockers. Sorry, I haven't made a sig... my hardware is something like this: Xeon E3-1245 v3, Supermicro X10SL7-F, 24GB, 21 drives (4TB and 5TB), v6.1.5, a pair of btrfs cache drives. Edit: lots of clues down here...
  18. Guys, I'm getting really crappy network performance from my Docker downloaders (SABnzbd, NZBGet)... I had SABnzbd working well on unRAID v6.0.x, saturating my 50Mbps cable connection... then I upgraded to v6.1.3 and the download speed was cut by more than 95%, down to 2-3Mbps. I tried both SABnzbd and NZBGet and they both exhibit the same performance. I'm now on the latest unRAID (v6.1.5) and I still get the same problem. I can verify that it's not an internet connection problem, since I can still attain full speed by running NZBGet on a win server 2011 box on the same network. Looking at the NZBGet log, I get a lot of these: iptables output: ifconfig: I have tried both bridge and host network settings for this docker; right now I'm leaving it as 'host' networking. What could be causing my issues? Is there a connection timeout setting somewhere that I need to change? What method should I use to diagnose the issue? Thanks in advance, -ysss edit: (I've just noticed dropped packets on br0)... edit 2: noticed the MTU sizes... I've disabled jumbo frame support on my switch and set the MTUs on all interfaces back to 1500 (a quick check for this is sketched below). Still no dice...
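     (A generic diagnostic sketch, not from the original thread.) Since the edits above point at jumbo frames and MTU sizes, one quick check after disabling jumbo frames is to list the MTU of every interface on the host (physical NICs, br0, docker0, veth*) and confirm they are all back at 1500; on Linux this can be read straight from /sys/class/net.

        import os

        SYS_NET = "/sys/class/net"

        # Print each interface's MTU; a mismatch between eth0, br0 and the
        # Docker-side interfaces is a common cause of stalled or very slow transfers.
        for iface in sorted(os.listdir(SYS_NET)):
            try:
                with open(os.path.join(SYS_NET, iface, "mtu")) as f:
                    mtu = int(f.read().strip())
            except OSError:
                continue
            flag = "" if mtu == 1500 else "  <-- not 1500"
            print(f"{iface:<12} MTU {mtu}{flag}")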
  19. On my system it's harmless (annoying since it shows on any PuTTY session, but harmless). But on a quick and dirty Google search, it looks like it could be Docker-related: NFS-related within an LXC container (Docker IIRC leverages LXC containers). I am curious whether, when the message appears for you, that's when one of your containers crashes. (Mine never have at all) I don't use NFS on any of my dockers... I'm not sure what happened with the docker that crashed; it started because I was getting 1/10th of my regular speed on SABnzbd (hurricane's?) and I went to reactivate my long-dormant NZBGet container. It needed an update, so I ran that... well, that thing ran for an eternity and crashed the web interface, and when I logged into the server via SSH, I saw the dreaded errors in the log and the system was unresponsive, so I rebooted after that. I know it wasn't 100% conclusive, but that's what I saw...
  20. I shouldn't have upgraded to 6.1.3... I'm getting this error, a Docker container can crash the whole system, and SABnzbd runs super slow on this.
  21. Check out the Supermicro counterpart as well. Supermicro is known to be bulletproof for servers.
  22. Did you do any of the following operations (and at which steps): 1. A disk/parity rebuild? 2. A parity check? 3. A format, other than in steps 3 and 5?
  23. I have the 4220; I think the backplanes are the same, just one fewer than the 4224. You'll need straight SFF-8087 to SFF-8087 (SAS multilane) cables to connect the expander ports to the backplanes. If I'm not mistaken, the 4224 has 6 backplanes (4 SAS/SATA ports per backplane). And 2 reverse breakout (SAS to SFF-8087) cables to connect the 8 SAS/SATA ports on the motherboard to the SAS expander. When buying the multilane cables, take note of the length and the orientation of the connectors. Some of them come with angled (L-shaped) connectors, and I don't think those can be plugged all the way into the backplanes, especially the bottom one. Yes, you won't need the AOC-SASLP-MV8 anymore, since the 8 onboard ports on the X10SL7-F can be expanded to 24 ports with a SAS expander (HP or Intel are the popular ones)... but a speed calculation may be in order if you want to avoid bottlenecks (see the rough numbers below). As I've mentioned earlier, with 22 drives I maxed out around 105MB/sec running a parity check... with my drives (4TB+) I think I should be getting a tad more, so I'm probably bottlenecked by the SAS bandwidth. But I may be mistaken on that; I'm fine with that speed, so I haven't dug into optimizing it further.
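     (Rough numbers, based on my own assumptions rather than anything in the original post: a 3Gb/s SAS-1 expander uplinked to the HBA over two 4-lane SFF-8087 cables.) Eight 3Gb/s lanes give roughly 2400MB/s of aggregate upstream bandwidth, and spread across 22 simultaneously spinning drives that works out to about 110MB/s per drive, which lines up with the ~105MB/sec parity-check ceiling mentioned above.

        # Back-of-the-envelope SAS bandwidth estimate (assumed figures; adjust to
        # the actual HBA/expander link speeds in your build).
        LANE_SPEED_MB_S = 300   # ~3 Gb/s SAS-1 lane after 8b/10b encoding
        UPLINK_LANES = 8        # two 4-lane SFF-8087 cables from the HBA
        DRIVES_SPINNING = 22    # drives active during a parity check

        total_mb_s = LANE_SPEED_MB_S * UPLINK_LANES
        per_drive = total_mb_s / DRIVES_SPINNING
        print(f"Aggregate uplink: {total_mb_s} MB/s")
        print(f"Per-drive ceiling with {DRIVES_SPINNING} drives: {per_drive:.0f} MB/s")
        # -> ~109 MB/s per drive, close to the ~105 MB/sec parity-check speed observed.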
  24. No problem. The cable is a tricky one, because usually even the problematic ones work most of the time, but once you put a lot of load on them (a parity check, etc.) they'll start flaking out.
  25. Btw, I've had a bad experience with no-name breakout cables that I bought from eBay (a China-based supplier). The Monoprice ones are okay, even though they look exactly like the no-name Chinese ones, just in different colors. And pay attention to the required cable length for your case... I use the HP expander for my unRAID box, with that mobo, in a 24-bay Supermicro case... right now I have 22 drives connected, and a parity check maxes out at around 100MB/sec when all drives are spinning together.