dirtysanchez

Everything posted by dirtysanchez

  1. Define "adoption fails outright in the first place"? Unless your DHCP server is handing out the controller address to the AP when it leases an address, which it isn't unless you specifically configured it to do so, you have to tell the AP where the controller is. That is done with the set-inform command. Why Ubiquiti made it so that you need to do it twice is beyond me, but it needs to be run once, then you adopt in the controller, then set-inform needs to be run again before the AP will provision. For the life of me I can't find official UBNT documentation stating so, but here's a website which shows it needing to be done twice. There are also numerous posts in the UBNT forums regarding the issue. http://helpdesk.maytechgroup.com/support/solutions/articles/3000008280-how-to-move-a-ubiquiti-unifi-access-point-to-a-new-controller-v2-x-. Why your first AP worked without doing so but your 2nd didn't is definitely strange. I only have a single AP but I did have to set-inform twice to get it initially set up.
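The manual flow, as a sketch. The controller address (192.168.1.10:8080) and AP address here are placeholders for illustration, and on some firmware versions you may need to enter the `mca-cli` shell first before set-inform is available:

```shell
# Sketch of the two-step adoption, assuming default AP credentials
# (ubnt/ubnt) and a controller at 192.168.1.10 -- both placeholders.
ssh ubnt@192.168.1.20                         # SSH into the AP
set-inform http://192.168.1.10:8080/inform    # tell the AP where the controller is
# ...now click Adopt in the controller UI, then run it once more:
set-inform http://192.168.1.10:8080/inform    # second run lets the AP finish provisioning
```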
  2. You actually have to run the set-inform command twice, once before and once after adoption. Doing it the second time is what solved your problem.
  3. If you can get to it by IP but not by name, you are having a DNS issue. Does the computer in question have a hard-coded IP address, or is it leasing an address via DHCP? At a command line, type "ipconfig /all" and "nslookup tower" (without the quotes) and post the output.
  4. As regards power consumption, my Ivy Bridge i3 based system idles around 30W. Since yours will be Haswell it should idle slightly lower, all other things being equal. Link to my system is in sig.
  5. You only lose quality with Plex if you are playing back the content on a client that is not capable of direct play, and therefore Plex is transcoding the stream down to a quality the client can display. If the client is capable of direct play, Plex simply streams the source file to the client with no intervention. If you were getting subpar quality with Plex, the issue wasn't Plex in and of itself, it was the clients you were attempting to play the content on.
  6. I have had plexWatch running for many months now, but all of a sudden the docker will no longer start. Reading package lists... Building dependency tree... Reading state information... The following extra packages will be installed: git-man liberror-perl patch rsync Suggested packages: gettext-base git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki git-svn ed diffutils-doc The following NEW packages will be installed: git git-man liberror-perl patch rsync 0 upgraded, 5 newly installed, 0 to remove and 77 not upgraded. Need to get 84.3 kB/3713 kB of archives. After this operation, 22.5 MB of additional disk space will be used. Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main patch amd64 2.7.1-4ubuntu2 404 Not Found [IP: 91.189.92.201 80] I noticed this same error earlier in this thread, but it was related to sickrage. Any help would be appreciated. EDIT: Disregard. Further research showed it might be related to the EDGE=1 variable. Apparently I had applied EDGE=1 a long time ago when originally trying to get plexWatch to work and it has been running that way ever since. I removed the EDGE=1 variable and the docker started right up.
  7. I have had plexWatch running for many months now, but all of a sudden the docker will no longer start. I know this is needo's docker, so I'm not sure if you support it, or if it is supported at all. In any case, here is the error from the docker log when you attempt to start the docker. Reading package lists... Building dependency tree... Reading state information... The following extra packages will be installed: git-man liberror-perl patch rsync Suggested packages: gettext-base git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki git-svn ed diffutils-doc The following NEW packages will be installed: git git-man liberror-perl patch rsync 0 upgraded, 5 newly installed, 0 to remove and 77 not upgraded. Need to get 84.3 kB/3713 kB of archives. After this operation, 22.5 MB of additional disk space will be used. Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main patch amd64 2.7.1-4ubuntu2 404 Not Found [IP: 91.189.92.201 80] Any help would be appreciated. EDIT: I found the needo thread, I have posted for support in there. Sorry for the confusion.
  8. Have never used nor had the need for them, but can see how they are beneficial in certain situations.
  9. My system (in sig) idles at roughly 33W at the wall as measured by a Kill-a-Watt.
  10. Just looked up the mobo and it is indeed PCIe 2.0. Upgrading the card should get you back to the parity check speeds you were seeing before you fully populated the existing card. But also as tr0910 says, a parity check can only move as fast as the slowest drive. The 500GB drive in your array is most certainly pulling down your average speed on parity checks. If you were to replace the card and remove the 500GB drive from the array, your average parity check speeds would likely increase to around 125MB/s or so. As an example, my array is 5 of the Seagate ST3000DM001 drives and my average parity check speed is roughly 155MB/s. EDIT: in fact you could probably get your speeds up to around 125ish without even replacing the card. If you removed the 500GB drive, put your fastest 6 drives on the mobo SATA ports, and your slowest 6 drives on the card, that may do it. It may not get you to 125 but it should certainly be faster than it is now.
  11. The short answer is yes, the card is overloaded. Your current card is PCIe v1, and so limited to 250MB/s per PCIe lane. It is an x4 card, so it has a theoretical bandwidth limit of 1GB/s. If the card has 8 drives connected, that means a maximum throughput of 125MB/s per drive. Most current gen spinners are capable of faster than 125MB/s, so on the faster outer sectors of the drives (early in the parity check) the card is not capable of transferring the data to the PCIe bus as fast as 8 drives can simultaneously read. The new card you link to is both x8 and PCIe 2.0. PCIe 2.0 is capable of 500MB/s per lane, so x8 gives you 4GB/s throughput, which is plenty enough for 8 spinners. In fact, it's almost enough for 8 SSDs. So assuming your motherboard slots are PCIe 2.0, then yes, the new card would remove the bottleneck.
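The bandwidth math above, as a quick sanity check (the per-lane figures are the commonly quoted effective rates for each PCIe generation, not exact encoded line rates):

```python
# Rough per-drive bandwidth ceiling for an HBA on the PCIe bus:
# total slot bandwidth divided evenly across simultaneously-reading drives.

def per_drive_bandwidth(mb_per_lane, lanes, drives):
    """Theoretical MB/s available per drive when all drives read at once."""
    return mb_per_lane * lanes / drives

old_card = per_drive_bandwidth(250, 4, 8)   # PCIe 1.x, x4 card, 8 drives
new_card = per_drive_bandwidth(500, 8, 8)   # PCIe 2.0, x8 card, 8 drives

print(old_card)  # 125.0 MB/s -- below what modern spinners can sustain on outer tracks
print(new_card)  # 500.0 MB/s -- ample headroom for 8 spinners
```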
  12. Even the fastest current gen platter-based spinners top out at around 200MB/s on the outer (fastest) tracks. The existing SATA III interface is capable of 600MB/s, fully 3 times faster than even the fastest current spinners can achieve. So the drives themselves have a long way to go before they can even max out the existing interface. Of course the much higher areal density of the mythical 60TB drive would likely have transfer speeds in excess of SATA III, so a faster interface would be required to take advantage of the speed, but the drive itself is far behind what the interface is capable of.
  13. Yes, I've heard nothing but good things about the EdgeMax router. Thank you for mentioning that the EdgeMax router is a prosumer product, as anyone considering purchasing one needs to understand it requires a good bit more technical knowledge than your average point-and-click consumer router. Luckily, I work in IT for a living and spent roughly 8 years specifically in Network Engineering/Administration at a previous employer (so I'll be fine), but most people without a solid understanding of networking and command line configuration should probably steer clear. Now if only I could get that kind of high-speed connection (I'm assuming you have gigabit fiber) here in the broadband-backwards USA! Any insights into the good stuff that was improved in the latest version?
  14. Count me as another Ubiquiti products fan. Recently installed a UniFi AP in the house and love the range as well as the incredible manageability and feature set. Looking to replace my Asus router with an EdgeRouter Lite soon. And running the UniFi controller in a docker just makes it all that much better.
  15. I updated to the v6 series at beta14b (which I would consider pretty late in the beta series), and the default cache drive format was btrfs. I hadn't even checked the setting and was surprised after I formatted a new cache drive that it was btrfs. I promptly changed it to xfs and reformatted.
  16. Do you mean the drive in File Explorer is still 20GB, or in disk management? If you expand the underlying virtual disk, Windows does not automatically expand the partition. You need to do it manually in disk management.
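If you'd rather do it from an elevated command prompt than the Disk Management GUI, diskpart does the same thing. A sketch; the volume number is an assumption you'd confirm from the `list volume` output:

```shell
diskpart
list volume       # find the volume sitting on the expanded virtual disk
select volume 2   # assumed volume number -- use the one list volume showed
extend            # grow the partition into the new unallocated space
```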
  17. I would also be very interested in this.
  18. But you likely wouldn't. Since the HAMR technology achieves its massive capacity increases through much higher areal density, read/write speeds would also increase accordingly, all other things being equal (which may or may not be the case). The increase isn't one-for-one, though: sequential throughput tracks linear bit density, which grows roughly with the square root of areal density, so a roughly 10x increase in areal density would translate to roughly a 3x increase in read/write speed at the same RPM. Even so, transferring that data to the bus today would be limited by SATA speeds, as SATA III is only roughly 3x faster than current gen spinners. A PCIe interface (or some other new interface) would be needed to get anywhere near what one of these future 60TB drives could likely deliver as far as transfer speeds. Point being, it's entirely possible that one of these not-so-far-in-the-future 60TB drives could have higher throughput rates than current SATA SSDs.
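A back-of-the-envelope version of that scaling, assuming the areal-density gain splits evenly between track pitch and bits-per-track (the 200 MB/s starting point and the 10x ratio are illustrative numbers, not measurements):

```python
import math

def throughput_scale(areal_density_ratio):
    # Sequential speed tracks linear bit density; if an areal-density
    # gain is split evenly between track pitch and bits-per-track,
    # throughput grows roughly with the square root of the ratio.
    return math.sqrt(areal_density_ratio)

current_speed = 200  # MB/s, a fast current-gen spinner on its outer tracks
ratio = 10           # hypothetical 10x areal density jump (e.g. 6TB -> 60TB)

projected = current_speed * throughput_scale(ratio)
print(projected)  # ~632 MB/s -- past SATA III's 600 MB/s ceiling
```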
  19. Yes, that would be perfectly fine. No spinner HD can even come close to full SATA II speeds, let alone SATA III.
  20. K does not mean no VT-d; it means the clock multiplier is unlocked, meaning the CPU can be overclocked. That said, in the past, K-series CPUs generally DID NOT support VT-d. It appears Intel is changing their tune recently, as there now seem to be K-series CPUs that DO support VT-d. As always, the Intel ARK website is your friend when determining what features a given CPU supports.
  21. Yet another vote for Asus routers. My RT-N66U is still running strong after roughly 3 years. Great hardware and great feature set. I am also using Merlin firmware. It's the only consumer-grade router I've ever had that doesn't require rebooting every week or two, and I've owned over a dozen. I just looked and the current uptime on my router is 248 days.
  22. FWIW I was unable to get the controller to find my AP also. From what I understand for the AP to automatically find the controller you either need to use the discovery software (like you have on your Mac), or set DHCP option 43 with the inform address of your controller. Once the AP has the inform address and contacts the controller and is provisioned, all works great. I did it the manual way. I ssh'd into the AP and set the inform address manually via the CLI and it showed up immediately in the controller. Also if your AP was already provisioned on another controller (your Mac) you would have had to reset the AP to defaults or "forget" the AP from the controller to find and provision it on another controller. An AP can only be managed by a single controller.