jademonkee

Everything posted by jademonkee

  1. Poll attributes is set to the default of 1800. Let's hope for a resolution soon.
  2. Just a heads-up that I can't get the shut-down command to work in unRAID v6.3.0 from my Android phone. I know that it previously worked, so I'm pretty sure it's due to the unRAID version upgrade, but I didn't try it before I upgraded, so I can't be 100% certain. Cheers.
  3. Dunno if it's any help, but I also just noticed that every time I open the web GUI or change pages in the GUI, it adds another error line to the console. So I'm guessing that the error is appearing at every UI refresh? The lines appear by themselves every 60 seconds - even after I disabled the 'Page update frequency'.
  4. Cheers. Will delete it in a moment. FWIW I noticed that the error stops after I stop the array. EDIT: Deleted dynamix.plg while in safe mode, then rebooted. I still get the error.
  5. FYI I just booted to safe mode and get the same error at boot. Is dynamix.plg the web interface? Will that leave me interfaceless? Thanks for your help (and prompt reply!)
  6. I have the same error appearing after boot since upgrading from 6.2.4 to 6.3.0. I've attached a system log taken after a reboot. To save you digging through the log for it, I have the following plugins installed: CA Auto Update Applications, CA Backup / Restore Appdata, CA Cleanup Appdata, Community Applications, Dynamix Cache Directories, Dynamix SSD TRIM, Dynamix System Statistics, Dynamix webGui (obvs), Preclear Disks, Recycle Bin, Statistics, Unassigned Devices. Were any of these what you meant by 'dynamix.plg', Squid? unRAID-SystemLog20170205.txt
  7. "I wouldn't rely on a proxy 100% and you wouldn't have the protection that's built into my VPN dockers, as in iptable blocking. So in short install delugevpn or rtorrentvpn and be happy :-)" Ah cheers, thanks. Will have to look into a provider that offers more than two simultaneous connections, then.
  8. I finally got around to installing your excellent sabnzbd-vpn docker today and have configured it to work with my VPN, and it's great. I'm using privoxy for Sonarr and it's working well, too. Given the success of the sabnzbd-VPN Docker, I'm now looking to install something for torrents. Now, my question is: do I install the VPN version of Deluge, or can I just route Deluge through the VPN running in my sabnzbd Docker via its privoxy? (My VPN provider only allows me two simultaneous connections, so I'd prefer the latter if it's possible.) If this IS possible, will Deluge stop dl/ul'ing if the VPN goes down? Like, will it bypass the proxy if it doesn't offer a connection to the net? Thanks for your help!
  9. Oh MAN! I just finished setting it up. Works a treat: yesterday when I did a test d/l while it was connected to the RPi it was only at 2.5MB/s, but now it's at a near-full 7.5MB/s, so I think the RPi was acting as a bottleneck. Now to learn a bit about privoxy (there's a quick proxy check sketched after this list)...
  10. I'd not seen that. Thanks for the tip. I'll have a go at configuring it now - there doesn't seem to be much documentation.
  11. Sorry for digging up an old thread, but I just set up an RPi as an outbound VPN and pointed my server at it. However, it means that I can no longer access Plex while outside the home. Given that I have two NICs in my server, I was wondering the same thing as you: is it possible to route all traffic for sabnzbd + Sonarr through the NIC that uses the VPN gateway, but keep all other traffic, including Plex, on the other NIC connected to the router's non-VPN gateway? Did you ever find out a way around this? (A possible policy-routing approach is sketched after this list.)
  12. "It's also a bad idea to buy '... a cheap UPS.' As a minimum, the UPS should have automatic voltage regulation (AVR). The lowest price units do not have this feature. I suppose any UPS is probably better than no UPS -- but it's really not a good idea to go cheap on your power protection." I splurged for a good pure sine wave one, but I'll be sure to check that the one he picks has AVR. Cheers again for the info.
  13. "Two comments r.e. this ... (1) Cutting power (as opposed to doing an orderly shutdown) is a BAD idea -- he'll almost certainly end up with corrupted data (2) A UPS is IMHO a MANDATORY part of a server. Not having one begs for data loss even if you don't have housemates who are likely to cause what's effectively a 'power loss'" Yeah, I understand both those things (I use a UPS) and told him as much. And keep telling him. He understands it, too. I should be a bit more honest: it's not really the housemates, although they don't know how to shut down the server, which is a concern if he's not around (thus me saying he should install an RPi VPN). The few times it's happened (well, sort of 'few' - it happened three times in the week between me installing and pre-clearing it and coming back to finish setting it up) were because servicemen cut the house power to repair something while he wasn't around (twice), or the cleaner disconnected it (once). So yeah, hopefully those were all isolated incidents, and a power loss won't happen before the time comes when he gets himself a cheap UPS.
  14. Cheers for the advice. Looking at CrashPlan now, it does look rather simple - and actually much cheaper than I expected (for a single computer) for the cloud offering. Plus, his housemates have a habit of cutting power to his server (he's still educating them... and also saving up for a UPS), so the assumed interruptibility and automatic start-againness (call me Shakespeare) of CrashPlan, as well as its easy scheduling, would be ideal for this situation. Set and forget, baby! So I reckon I'll start with the free version, and then decide if the paid version is worth the extra, given its ease of use and access. Thanks again!
  15. CrashPlan is something I'd not considered, thanks. I'll look into that. Bringing the servers together is not really possible, as neither of us have cars (1 hour via public transport carrying a server...) and I'd rather avoid spending the £25 on an Uber.
  16. Hi there, I recently convinced a friend that unRAID was definitely a better option for him than a new NAS, and have set him up with a neat little HP Microserver much like my own. He has a 200 Mb fibre connection and only two disks in his server at the moment, so I thought I'd take up one of his bays with a disk of my own and use it as off-site backup for my music and photos. I have two questions. First: it'll be a 2TB disk, and the initial copy will be just north of 1TB of data, which I obviously don't want to push over the internet if I can avoid it. Is it possible, then, to attach the disk to my unRAID server, format it, copy over the initial data, then bring the disk over to his place and install it? If so, what's the procedure? How do I remove it from my array, then correct the parity? And how do I install it in his array, then correct the parity? Second: what's the best way to keep two remote servers in sync? I don't mean real-time: I'm happy to manually run a script/command every so often, or set one to trigger periodically, but I don't know what would be best to run. I was thinking of installing a Raspberry Pi at his place and running OpenVPN on it (super easy using this: http://www.pivpn.io/), which would also be handy for remote management if he comes across a problem he can't fix (I'm aware that we can run OpenVPN as a Docker, but I'd rather have it as a separate device), or for him to shut down/boot up his server while he's away. So, with this setup, would I be able to VPN into his network (via Tunnelblick on my Mac), SSH into his server (using Terminal, obvs), then just run an rsync command? Or do I run the command from my server? What would be the best rsync command(s) if I want to populate changes from two of my shares (including deletions) to my disk in his array? (An example is sketched after this list.) And should I set up an identical user account to my own on his server? Sorry for all the noobness, and thanks for your help.
  17. Yeah, good thinking. I shall keep config on the cache. Thanks. I'm always keen for an upgrade.
  18. Yeah, I've never downloaded anything so large before - my fibre internet connection kept chucking a fit while it was downloading, too. Like Scotty: "I canna give it any more, Cap'n!" or something like that. It was a collection of many files: first the many rars, then many small files unpacked from them, which is why the Mover could do its thing while the archive was being unpacked (I was lucky in that regard). Given that it's so rare for me to dl files that large, perhaps I can just work around it like I did this time. I don't really want to move /config onto the array. Thanks for the pointer re the config directory.
  19. Hi there, I have a 124 GB cache pool across two SSDs. For the first time yesterday I started downloading an NZB that was greater than the size of the cache pool (200GB). I didn't think it was set to, but SABnzbd was saving the temp download files to the cache drive. So a few times during the download I had to pause the download, stop SABnzbd, invoke the mover, then start SABnzbd + the download again after the mover had completed. Then, once the download had finished, I noticed that it was unpacking to the cache drive - I believe this is because it's extracting to a User Share that uses the cache, and I didn't think to disable that option until it had started unpacking. As such, I think I've solved the unpacking part of the problem (I'll create a 'large SABnzbd files' share that big SABnzbd files can be extracted to. It won't use the cache drive, and I will set it as the destination for a particular SABnzbd category - if I'm wrong in this solution, please let me know). Anyway, I only noticed that it was unpacking to the cache when I got a warning that the cache drive was filling up, so I invoked the mover, and it seems to be emptying only a fraction slower than it's filling up as the files unpack, so I think I've avoided it becoming critically full. I was unsure if the mover would be happy performing this, but it seems to be working fine. So, after all this, I have two questions: 1) How do I stop SABnzbd from using the cache drive for its temp folder? I have set the incomplete_downloads folder to /mnt/user/bigfiles/sabnzbd/incomplete (at the moment that share does use the cache, but I have previously set it to *not* use the cache, and SABnzbd still stored the downloading files on the cache drive - I could see it filling up). So it must be a setting somewhere else, yes? (There's a sketch of what to check after this list.) 2) What would happen if I *didn't* invoke the mover while SABnzbd was extracting a huge archive? Like, say I'm away from my computer while this happens - will it end in cataclysm? Or will it just start writing directly to the array like it ain't no thang? I didn't want to risk the scenario, which is why I invoked the mover so many times, but if the archive extraction seamlessly starts extracting to the array, then it won't be a problem, will it? Anyway, thanks for your help.
  20. Ah yes! I see now. Thanks very much, johnnie.black (again - you've helped me in the past, too)!
  21. Hi there, I've just created a cache pool for the first time (1x 128GB Samsung 840 PRO SSD; 1x 120GB Corsair Force 3 SSD). I plugged them both in at the same time, and had taken them directly from my Windows PC (they haven't been cleared or formatted). Upon boot, unRAID threw up a temperature error for the Corsair, saying it was running at 128ºC, even though it was cool to the touch - turns out it doesn't have a temperature sensor built in (http://forum.corsair.com/v3/showthread.php?t=102250). So I assigned the Samsung as cache1, and the Corsair as cache2. Upon starting the array, I was asked to format the Samsung, but not the Corsair. I thought it might be related to the temperature problem, so I set the warning and critical disk temperature thresholds to 200ºC, but it didn't appear. I stopped and re-started the array, but the drive still isn't able to be formatted. As such, the cache size still reads as 128GB (although, I can't find any info on how big the cache should be in a cache pool). Anyway, is this normal behaviour? Or is something peculiar happening with my Corsair drive? I've attached a screenshot of my main page, so you can see what I mean. Thanks very much.
  22. Hi there! I notice that the front page of the LMS GUI is saying that there is a new version available. Do I have to wait for a new docker version to come out, or can I run the update command sudo dpkg -i /config/cache/updates/logitechmediaserver_7.9.0~1469176740_amd64.deb ? Just not sure, as I haven't run LMS as a Docker before. (There's a note on updating containers after this list.) Many thanks.
  23. Good to know. I've just flashed my Dell H310 to IT mode, and have plugged in the MicroServer's drive cage (all my HDDs), as well as a MiniSAS > 4x SATA adapter (for my SSD cache drive). About to boot now, so will see how speeds are affected. Thanks again for your help and info. EDIT: Just started a parity check, and it is running at 103MB/s.
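
Re post 9: a quick way to sanity-check that traffic is actually leaving via privoxy (and therefore the VPN). This is only a sketch under assumptions: the sabnzbd-vpn container exposes privoxy on its default port 8118 and the server's LAN address is 192.168.1.10 - substitute your own values.

    # Ask a "what's my IP" service through the proxy; it should return the
    # VPN endpoint's address, not your home connection's public IP.
    # (Host and port are assumed values.)
    curl -x http://192.168.1.10:8118 https://icanhazip.com

    # The same request without the proxy, for comparison:
    curl https://icanhazip.com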
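Re post 11: one way the split could work is policy routing on the server, so only selected traffic uses the RPi VPN gateway while everything else, Plex included, keeps the normal default route. This is a rough sketch rather than a tested recipe; the RPi gateway at 192.168.1.2, the second NIC being eth1 with address 192.168.1.51, and routing table 100 are all assumptions.

    # Separate routing table whose default gateway is the VPN RPi (assumed address)
    ip route add default via 192.168.1.2 dev eth1 table 100

    # Traffic sourced from eth1's address uses that table; everything else
    # (including Plex answering on eth0) still goes out via the router directly.
    ip rule add from 192.168.1.51 table 100

The remaining work is binding only sabnzbd/Sonarr to the second interface, which is the fiddly part and not shown here.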
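Re post 16: a minimal rsync-over-SSH sketch for pushing changes (including deletions) from two local shares to the remote server once the VPN is up. The remote address 192.168.20.10 and the share name offsite_backup on his array are placeholders for illustration, not his actual layout.

    # Mirror two shares to the remote array, propagating deletions.
    # --dry-run only previews the changes; drop it once the output looks right.
    rsync -avh --delete --dry-run -e ssh /mnt/user/Music/  root@192.168.20.10:/mnt/user/offsite_backup/Music/
    rsync -avh --delete --dry-run -e ssh /mnt/user/Photos/ root@192.168.20.10:/mnt/user/offsite_backup/Photos/

Running it from either end works; kicking it off from the local server over the VPN keeps the schedule in one place.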
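Re post 19, first question: with SABnzbd in a Docker container, the path typed into SABnzbd is a container-side path, and where the files physically land is decided by the volume mapping in the Docker template. So the thing to check is which host path the incomplete-downloads folder is mapped to, and whether that host path lives on a share set not to use the cache. A sketch, with the container name sabnzbdvpn and the container path /incomplete-downloads assumed for illustration:

    # Show the host-path -> container-path mappings for the container
    docker inspect -f '{{ .HostConfig.Binds }}' sabnzbdvpn

    # The incomplete folder should map to something like
    #   /mnt/user/bigfiles/sabnzbd/incomplete -> /incomplete-downloads
    # with the bigfiles share set to "Use cache disk: No", and SABnzbd's
    # temporary download folder pointed at the container-side path.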
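Re post 22: changes made inside a container (such as running dpkg) are usually lost when the container is recreated, so the more common route is to update the image itself - on unRAID that is normally done from the Docker tab's update check rather than the command line. A hedged sketch of the manual equivalent; the container name lms and image name example/logitechmediaserver are hypothetical placeholders, not the actual template values.

    # Pull the newer image, then recreate the container from the saved template
    docker pull example/logitechmediaserver
    docker stop lms && docker rm lms
    # re-add from the template so the existing /config mapping (and library) is reused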