About kingmetal

  1. Awesome. That makes sense, and it seems like a potentially acceptable risk. The enclosure will be connected to a UPS, so my primary concern is a faulty controller or power supply. I'll need to rethink this whole setup in the future. Unless anyone thinks this is data suicide, I'll proceed (as long as I can get per-disk S.M.A.R.T. reporting via eSATA, that is!)
  2. Currently I have a Mediasonic HFR7-SU3S2; my eSATA card just arrived today. I'm not super worried about performance (I believe both the enclosure and the eSATA card support port multiplication) - my main concern is stability and accurate per-disk S.M.A.R.T., which I'm not sure I'll get over eSATA. If not, the enclosure might get returned and I'd pick up something this forum recommends instead. Cool, this is what I was hoping would happen. Does the array stop when this happens? My ideal situation would be:
  - Enclosure fails
  - unRAID detects this and shuts the entire array down
  - As long as I can re-introduce the drives to the array _somehow_, data loss is limited to whatever was being written at the time
  Does rebuilding the disabled disks just involve checking the consistency of the filesystem, or is it a full parity rebuild against whatever data is available across all the disks?
  3. I'm in the process of adding a cheap eSATA enclosure to my unRAID server. I realize this significantly increases the risk of data loss, since the enclosure itself could easily fail / lose power / get struck by a meteor - but I'm trying to gauge the actual risk. I understand that if two disks fail in my array I will lose the data on both, but what does unRAID do if the array loses multiple disks (up to 4 in this case) and the disks themselves are recoverable? Is any data being written at the time lost but the array recoverable? Does the array immediately stop? I'm willing to accept the downtime / risk of losing whatever data was being written at the time if the array is recoverable - but if the power supply to the enclosure fails and I stand to lose four disks' worth of data, I may have to find another solution (or just run a single disk in the enclosure). Basically: I know this is a bad idea, I'm just trying to gauge how bad of an idea it is!
  4. Thanks for the response, I'll check with my provider. I had previously asked them if they restrict any ports, but I can see now that they may have assumed I meant outbound ports. I take it that no additional configuration is needed if my provider isn't restricting incoming ports, other than making sure the Deluge incoming port matches whatever port/port range my provider supports? Thanks!
  5. I hope this hasn't been answered a million times already, but apparently the incoming port for Deluge is blocked by default (or at least it is if one uses a custom VPN, like I am). Is there a workaround for this that anyone has discovered? I've been digging through the config to try to make heads or tails of how I might make the appropriate changes to iptables myself, but I'm a total novice.
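  In case it helps anyone else digging through the same config, the kind of change I'm imagining would look something like this - a rough sketch only. I'm assuming the VPN tunnel interface is tun0 and that 58946 is the incoming port assigned by the VPN provider; both are placeholders that would need adjusting to match the actual container setup:

  ```shell
  # Hypothetical iptables additions - tun0 and port 58946 are assumptions,
  # not values taken from the actual container config.
  # Accept incoming peer connections arriving over the VPN tunnel:
  iptables -A INPUT -i tun0 -p tcp --dport 58946 -j ACCEPT
  iptables -A INPUT -i tun0 -p udp --dport 58946 -j ACCEPT
  # Allow the matching outbound traffic from that port:
  iptables -A OUTPUT -o tun0 -p tcp --sport 58946 -j ACCEPT
  iptables -A OUTPUT -o tun0 -p udp --sport 58946 -j ACCEPT
  ```

  Deluge's incoming port would then need to be set to the same value under Preferences > Network.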
  6. Mine was wonky in a similar way. Made the changes, complained on the forum, and then BAM, it just worked like magic.
  7. You mean so you can reach the web client externally? Or so you can reach the daemon? The web client is bound to 8112 by default; if you forward requests on that port to your Deluge box, you can log into the web client from outside your network. The daemon port is 58846.
  8. I had some of these symptoms until I told Plex to use a static port. I think by default it uses a random port number and attempts to use UPnP to get out to the Internet and talk to MyPlex, and the UI is confusing because the port is specified but a checkbox needs to be cleared to turn the port randomization off. This would only explain unavailability outside of your LAN, but just my 2 cents.
  9. Thanks binhex! I should have been clearer in my edit above: I actually got it to work - not sure what my original issue was, but using the account I explicitly added (kingmetal) and localhost:58846 I'm now connected to Deluge, and CouchPotato is able to pause torrents when they finish downloading and attempt to hard link / reseed. I say attempt because I am STILL getting file-not-found errors from Deluge after the torrents are moved and hard linked. I am still playing with the setup. I suspect that using the array as a download target might be part of the problem, but it also could be that CP's config has been played with a bunch of times. Either way, I doubt any of this has to do with your Dockers as much as my specific config! Thank you so much for your incredible support.
  10. UPDATE: Nevermind! It started working with localhost:58846 and my explicit user account. Still curious what the line I'm missing from my auth file is!
  Hmm, I'm still having issues getting CouchPotato and DelugeVPN talking:
  - Updated DelugeVPN (and CouchPotato as well, for good measure, although no update was available) by doing a "Force Update".
  - Explicitly mapped port 58846.
  - Set up CouchPotato, explicitly setting my unRAID server's IP (tried localhost as well) and the correct port, and tried the admin/webui password.
  Still getting a connection failed. Also tried the explicit user account I created in the auth file previously. Same result (see screenshot). Here's the dump of my auth file:
  cat /mnt/user/appdata/DelugeVPN/config/auth
  localclient:7c57a288aee488c3fae64b48221d44f600283481:10
  kingmetal:[password]:10
  Looks unchanged from before the update. Did my changing the file screw things up? What's the missing line? I also noticed that the daemon port was not added to the DelugeVPN host file (https://github.com/binhex/docker-templates/blob/master/binhex/delugevpn.xml). Does it not need to be explicitly enabled? I would think it would have to be.
  11. First of all, binhex, your Docker builds are amazing and I am learning a lot about Docker because of your projects. Thank you so much! I have two super-nooby issues with DelugeVPN:
  1. I cannot get CouchPotato and Deluge to speak to each other. Are the iptables rules set up such that access to the Deluge daemon port is blocked even if it is explicitly exposed to the host machine? I've added a line to the Deluge auth file with a username / password (and the appropriate permission level), I've allowed remote connections, and I've mapped the daemon port to the host, but CouchPotato still can't speak to Deluge. Black-holing has weird issues for me, and I'd like to take advantage of CouchPotato's ability to hard-link and keep seeding.
  2. This is the easy one: if I enable Privoxy, do I need to explicitly point Deluge at the proxy or does it 'just work'? If so, which protocol does it use? I'm able to point the browser of the machine I am typing this post on at my Privoxy server and it works like a charm, but I want to make sure I have it set up right with Deluge. I'm assuming it's just a no-auth HTTP proxy in this configuration?
  EDIT: I've realized that I may just be mistaken here as to the usefulness of Privoxy in my specific setup. I am routing all my traffic through a VPN provider (VikingVPN, who I cannot recommend highly enough for their service and customer support). Since they do no shaping of the traffic, are the advantages of Privoxy moot, as my ISP will not be able to ascertain the type of traffic Deluge is throwing out there? Is Privoxy just there to provide a way of sending traffic from other applications on my network through my VPN tunnel? Thank you so much!! My eventual plan is to write up some newbie-centric documentation for getting this all working.
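  For reference (and for anyone else fumbling with the same file), the line I added follows the auth file's username:password:authlevel format, with 10 being full access as I understand it. The username and password below are placeholders, not my real values:

  ```
  # one entry per line, format username:password:authlevel
  localclient:<auto-generated-hash>:10
  someuser:somepassword:10
  ```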
  12. Been following this thread as a learning tool to get more familiar with Docker. I have a couple of questions, binhex: 1. First and foremost, did I miss a donation link somewhere? I know this is all in the spirit of community/fun but I'd be happy to buy you a beer/coffee/something over the internet for all your hard work. Or a book or SOMETHING! 2. Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces? I know you're doing some IP tables mapping, and honestly I've got a very limited understanding of how all this should work, but I've been reading The Docker Book (http://www.dockerbook.com/) and the chapter I'm currently on is about inter-container communication. I've been trying to find the time to pull down your image and just poke around at it to answer some of these questions myself, but I'm still pretty bone-headed with this stuff. Just curious what methods you found - I know you're looking to expose the deluge daemon to other services (which is EXACTLY what folks like me need). Anyway, great stuff and THANK YOU.
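  To make question 2 concrete, here's roughly what I mean by the two approaches - purely illustrative commands, not how your image is actually run (the image and container names are just placeholders I'm guessing at):

  ```shell
  # Approach A: publish the web/daemon ports on the host, so other
  # containers (or anything on the LAN) reach them via the host's IP:
  docker run -d --name delugevpn -p 8112:8112 -p 58846:58846 binhex/arch-delugevpn

  # Approach B: legacy container links - Docker injects the linked
  # container's address into the client container's /etc/hosts:
  docker run -d --name couchpotato --link delugevpn:deluge binhex/arch-couchpotato
  ```

  My (limited) understanding is that links only work container-to-container, while published ports are reachable from anywhere that can see the host - which seems relevant to exposing the daemon to other services.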
  13. Those cards still go for $84-ish. Not the end of the world if it goes that way, but I'm surprised that nobody has found a low-cost card that works reliably. In a perfect world I'd be able to find 2 eSATA and 2 internal SATA ports for $30-$50 and call it a day.
  14. Yeah, it does only have 5 ports on the board, which is kind of a bummer, but otherwise I like it a lot for the price. Assembled mine, but haven't gotten a chance to actually set up unRAID yet. General question: I did some forum digging and it seems like SATA add-on cards are hit-and-miss unless you drop $70+ on one. Are there any 'silver bullet' cards at a reasonable price that anyone can recommend to a Linux newbie?
  15. I REALLY like those little HP servers and was eyeing them before I stumbled onto the Lenovo offerings. From my research, the Turion in that server is not sufficient for my needs: Plex tends to need a PassMark score that exceeds 2000 to do a single 1080p stream, or so the forums say. Comparing the two here: http://cpuboss.com/cpus/Intel-Core-i3-4130-vs-AMD-Turion-II-Neo-N54L leads me to believe the Turion would fall down with more than one Plex stream - and for $25 more I'd rather not take the risk, although the form factor is PERFECT.
  I agree that running fewer, larger drives in the future, as opposed to cramming more drives into the case, is likely the direction I'd go - but once again, I think the TS140 is sufficient for my needs since I can easily fit 5 drives in it right away. Size is a factor, since I'm going to have to stare at this thing on a regular basis - but most importantly I'm worried about noise. The TS140 seems to have a very low noise profile (potentially due to inferior cooling). I think the TS440 is a better box and a much better value; I just think that for my specific needs the TS140 may do it for me.
  I REALLY appreciate the input and I will do some thinking on this over the weekend! If I can figure out a place in my house to keep the TS440 where noise won't be a major concern I will go that route, but the only places I can think of are closets, and while I've run plenty of 'closet server farms' in my day with great success, it's a pretty bad idea!
  EDIT: went through and corrected the model numbers in my posts; coffee must not be working!