weirdcrap

Everything posted by weirdcrap

  1. I have noticed this running through my Docker log for DelugeVPN (referring to the "Error: Invalid port specification" and the "malformed expression"):

        2016-04-24 14:09:27,263 DEBG 'deluge-script' stdout output: [info] Sleeping for 5 mins before rechecking listen interface and port (port checking is for PIA only)
        2016-04-24 14:14:27,270 DEBG 'deluge-script' stdout output: [info] Deluge listening interface IP 10.117.1.6 and VPN provider IP 10.117.1.6 match
        2016-04-24 14:14:27,342 DEBG 'deluge-script' stderr output: Error: Invalid port specification:
        2016-04-24 14:14:27,342 DEBG 'deluge-script' stdout output: [info] Deluge incoming port closed
        2016-04-24 14:14:28,089 DEBG 'deluge-script' stdout output: [info] Reconfiguring for VPN provider port
        2016-04-24 14:14:28,089 DEBG 'deluge-script' stdout output: [info] Setting listening interface for Deluge...
        2016-04-24 14:14:28,530 DEBG 'deluge-script' stdout output: Setting listen_interface to 10.117.1.6.. Configuration value successfully updated.
        2016-04-24 14:14:28,574 DEBG 'deluge-script' stdout output: [info] Setting incoming port for Deluge...
        2016-04-24 14:14:29,008 DEBG 'deluge-script' stdout output: Setting random_port to False.. Configuration value successfully updated.
        2016-04-24 14:14:29,478 DEBG 'deluge-script' stderr output: [ERROR ] 14:14:29 main:347 malformed expression (,)
        Traceback (most recent call last):
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/main.py", line 344, in do_command
            ret = self._commands[cmd].handle(*args, **options.__dict__)
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/commands/config.py", line 102, in handle
            return self._set_config(*args, **options)
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/commands/config.py", line 136, in _set_config
            val = simple_eval(options["set"][1] + " " .join(args))
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/commands/config.py", line 85, in simple_eval
            res = atom(src.next, src.next())
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/commands/config.py", line 54, in atom
            out.append(atom(next, token))
          File "/usr/lib/python2.7/site-packages/deluge/ui/console/commands/config.py", line 77, in atom
            raise SyntaxError("malformed expression (%s)" % token[1])

     I think this might have to do with the fact that I changed the Privoxy port so Deluge and SABnzbd could run side by side, but I figured I had better check with you to be sure. Even though Privoxy is disabled in SAB, it still appears to bind the port, so Deluge wouldn't start until I changed it. Is it safe to just remove the port bindings from both if I don't intend on using Privoxy with either of them?
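     While debugging, I've been checking which container actually holds the port with something like the commands below. The port 8118 is just an example (Privoxy's usual default); substitute whatever port your containers are configured for:

        # List published port mappings for each running container
        docker ps --format '{{.Names}}\t{{.Ports}}'

        # Show which process is actually listening on the suspect port (example: 8118)
        netstat -tlnp | grep 8118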
  2. For clarification of your statement on GitHub about Privoxy: "It also includes Privoxy to allow unfiltered access to index sites." Privoxy is only necessary if your ISP is filtering access to index sites, and Privoxy tunnels that access through the VPN? So if I don't have any ISP filtering like that, I can simply leave Privoxy disabled? EDIT: Never mind, I found the explanation I was looking for in your DelugeVPN support thread. To answer my own question: yes.
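     For anyone who does enable it, a quick sanity check is to route a request through Privoxy and confirm the reported IP is the VPN endpoint rather than your ISP address. The host and port here are assumptions; adjust to your container's mapping:

        # Fetch your apparent public IP through the Privoxy proxy (host/port are examples)
        curl -x http://192.168.1.10:8118 http://ifconfig.me

        # Compare with a direct request; the two should differ if traffic is tunneled
        curl http://ifconfig.me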
  3. Will this be something handled by the update process when that feature is removed? Or will users need to go in and manually un-map the transcode directory when this change takes effect?

     "I had to unmap mine."

     What version of PMS are you using, the latest update? I tried removing the mapping after I updated today and Plex got pissed and wouldn't play anything that required transcoding.

     "I'm running 0.9.16.4. I'm referring to the DOCKER mapping I had for /transcode, not the one under PMS/Server/Transcoder. I've left that one blank so that it just defaults to config as intended."

     I am also on 0.9.16.4. That is probably where my issue is: I had removed the Docker mapping but left the mapping to /transcode under PMS/Server/Transcoder, as I was fairly certain that was the default value when I first installed PMS.

     EDIT: Yes, that was indeed my problem, thanks!
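     For anyone else hitting this, the Docker-side mapping in question is the kind of volume flag sketched below; removing it (and leaving the Transcoder directory in the PMS server settings blank) lets transcoding fall back to the config path. Image name and host paths here are examples, not my exact setup:

        # Example only: image name and host paths are stand-ins for your own
        docker run -d --name plex \
          -v /mnt/cache/appdata/plex:/config \
          -v /mnt/user/media:/media \
          -v /mnt/cache/transcode:/transcode \
          plexinc/pms-docker
        # After the change, drop the "-v ...:/transcode" line and leave the
        # Transcoder directory in the PMS server settings blank.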
  4. Will this be something handled by the update process when that feature is removed? Or will users need to go in and manually un-map the transcode directory when this change takes effect?

     "I had to unmap mine."

     What version of PMS are you using, the latest update? I tried removing the mapping after I updated today and Plex got pissed and wouldn't play anything that required transcoding. Did you have to change the directory for transcoding in the server interface as well, to something other than /transcode? I assumed that since Plex would force transcoding to the config directory, I wouldn't need to change anything in the server settings besides removing the Docker mapping.
  5. Will this be something handled by the update process when that feature is removed? Or will users need to go in and manually un-map the transcode directory when this change takes effect?
  6. "Write speed will degrade without TRIM, but a decent SSD will still be much faster than an HDD. I still recommend you do frequent parity checks in the beginning; sync errors should always be 0. Let us know if you keep it and get sync errors. Let me just add, as I was not very clear above, that in your case, and assuming your parity is an HDD, write speed will be limited by its speed; the SSD will only improve reads."

     Yes, everything in my array is an HDD except for this one disk. I am running a parity check on the array now and will report my results. The write speeds aren't a huge deal to me; the main reason I wanted the SSD for my Plex was so it could serve my library information faster and to more people without any hangups from the library information itself. I noticed that when the library was on my cache drive, even just viewing my own content locally was slow. My actual media is spread across enough disks that I don't think I would run into any bottlenecks unless a number of people are all trying to stream from the same disk at the same time (which I think will be rather unlikely).

     EDIT: The parity check is finished and no errors have been reported. I will keep an eye on it and run a check about once a week for the next couple of weeks and see what comes up.

     So, a follow-up to this if anyone is interested: I have been running with a single SSD as part of my array for a few months now and have noticed no issues with parity errors or any other data integrity issues. Without further testing I don't want to make any sort of absolute statement, but it appears that you can have SSDs in your array as long as you don't have TRIM enabled on them. If anyone else would like to weigh in on their experiences with this sort of configuration, I'm sure it would be helpful to the community.
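     If you want to double-check that nothing is issuing TRIM/discard against an array SSD, this is roughly what I look at (the device name is an example):

        # Show discard (TRIM) capabilities per device; non-zero DISC-GRAN/DISC-MAX
        # means the device supports TRIM, not that anything is actually using it
        lsblk --discard /dev/sdX

        # Make sure no scheduled job is running fstrim against the disk
        grep -r fstrim /etc/cron* 2>/dev/null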
  7. "Write speed will degrade without TRIM, but a decent SSD will still be much faster than an HDD. I still recommend you do frequent parity checks in the beginning; sync errors should always be 0. Let us know if you keep it and get sync errors. Let me just add, as I was not very clear above, that in your case, and assuming your parity is an HDD, write speed will be limited by its speed; the SSD will only improve reads."

     Yes, everything in my array is an HDD except for this one disk. I am running a parity check on the array now and will report my results. The write speeds aren't a huge deal to me; the main reason I wanted the SSD for my Plex was so it could serve my library information faster and to more people without any hangups from the library information itself. I noticed that when the library was on my cache drive, even just viewing my own content locally was slow. My actual media is spread across enough disks that I don't think I would run into any bottlenecks unless a number of people are all trying to stream from the same disk at the same time (which I think will be rather unlikely).

     EDIT: The parity check is finished and no errors have been reported. I will keep an eye on it and run a check about once a week for the next couple of weeks and see what comes up.
  8. That would be awesome. I was wondering if it would be OK if I just didn't enable TRIM. From my understanding (after a read-through of TRIM on Wikipedia), the only real downside of not having TRIM is a reduction in write speed (and drive life) when the SSD is writing to blocks that the OS has flagged as free but that still need to be erased before they can be overwritten, correct?
  9. I am running unRAID 6.1.8. So, without first researching the issue, I stupidly assumed that I could put an SSD into my protected array and use it to store my Plex library files. I wanted to move them off the cache drive, as it is an old drive, and I wanted the files protected in case the drive failed so I wouldn't have to completely rebuild my library. Browsing around on the Lime Tech site today, I noticed that this is actually entirely unsupported and could cause data loss due to TRIM. So my question to all you lovely people is: what would be my best course of action for removing the SSD from the array? I know there are instructions here, but they haven't been updated for unRAID 6 yet, so I didn't know if that was still the right way to go about this. My new plan is to remove the old cache drive, replace it with the 250GB SSD, and just leave my docker.img and Plex files on it, probably with an rsync script that runs nightly to back up the Plex library files in case the drive ever dies.
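     Roughly what I have in mind for that backup script is sketched below; the paths are examples, and the idea is to hook it into a cron entry (or a scheduling plugin) so it runs each night:

        #!/bin/bash
        # Nightly backup of the Plex appdata from the cache SSD to a protected
        # array share. Paths are examples; adjust to your own layout.
        SRC="/mnt/cache/appdata/plex/"
        DST="/mnt/user/backups/plex/"

        # -a preserves permissions/timestamps, --delete mirrors removals
        rsync -a --delete "$SRC" "$DST"

     A cron line along the lines of "0 3 * * * /boot/scripts/plex-backup.sh" (the script path is hypothetical) would then run it at 3 AM each night.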
  10. OK, I created a test account so I wouldn't hose my real Dropbox if I did something stupid. Where exactly is the dropbox.py file, in the docker.img itself? I assumed that it would be in one of the mapped container volumes, but I can't seem to find it.

      Well, I think I found it. It appears as though dropbox.py is at /usr/local/bin/dropbox.py inside the Docker container, but when I run the exclude add command I get an error. I am trying to follow along loosely with these instructions but am getting an error that Dropbox isn't running. The command I tried:

         root@VOID:/# docker exec Dropbox /usr/local/bin/dropbox.py exclude add Dropbox/test
         Dropbox isn't running!
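      One guess on my part (not a confirmed fix): dropbox.py talks to the daemon via the invoking user's home directory, so if the daemon runs as a non-root user inside the container, the command has to be run as that same user. Something like the following, where "abc" is a placeholder for whatever user the container runs Dropbox as:

         # Confirm the daemon is actually up before trying to exclude anything
         docker exec -u abc Dropbox /usr/local/bin/dropbox.py status

         # Once status reports the daemon is running, exclusions should work
         docker exec -u abc Dropbox /usr/local/bin/dropbox.py exclude add Dropbox/test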
  11. That did it! It performs exponentially better when formatted as XFS vs. ext4. The media starts after only a few seconds' delay. Thanks!
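      For anyone repeating this, the reformat amounted to something like the commands below (destructive, and the device name is an example; double-check which disk you are wiping — the Unassigned Devices plugin can also format from the GUI):

         # WARNING: destroys all data on the target partition (example device)
         umount /mnt/disks/transcode
         mkfs.xfs -f /dev/sdX1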
  12. "Try running swapoff on this swapfile and then swapon again."

      Same issue. It will create the swap file fine, but it won't start it. Let me know what info you might need from me to troubleshoot.

         root@VOID:~# swapoff -v /mnt/cache/swapfile
         swapoff on /mnt/cache/swapfile
         swapoff: /mnt/cache/swapfile: swapoff failed: Invalid argument
         root@VOID:~# swapon -v /mnt/cache/swapfile
         swapon on /mnt/cache/swapfile
         swapon: /mnt/cache/swapfile: found swap signature: version 1, page-size 4, same byte order
         swapon: /mnt/cache/swapfile: pagesize=4096, swapsize=4294967296, devsize=4294967296
         swapon: /mnt/cache/swapfile: swapon failed: Invalid argument
  13. I had some issues with my cache drive yesterday and ended up reformatting it. Now when I try to start the swap I get "swapon failed: Invalid argument". I am creating the swap on /mnt/cache as recommended, and I didn't have any issues with the plugin until I reformatted the cache drive. Any ideas? I tried to do it manually via SSH and encountered the same error:

         root@VOID:~# swapon -v /mnt/cache/swapfile
         swapon on /mnt/cache/swapfile
         swapon: /mnt/cache/swapfile: found swap signature: version 1, page-size 4, same byte order
         swapon: /mnt/cache/swapfile: pagesize=4096, swapsize=4294967296, devsize=4294967296
         swapon: /mnt/cache/swapfile: swapon failed: Invalid argument
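      A likely culprit (an assumption on my part, since it depends on what the cache was reformatted to) is the filesystem: kernels of this era cannot swap to a file on btrfs, and a sparse or copy-on-write file fails with exactly this "Invalid argument". On a filesystem that does support swapfiles, the standard recreation steps look like:

         # Recreate the swapfile with fully preallocated blocks (no sparse holes)
         dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=4096
         chmod 600 /mnt/cache/swapfile
         mkswap /mnt/cache/swapfile
         swapon -v /mnt/cache/swapfile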
  14. No, it hadn't even started playing after 20 minutes. The size of the folder on the transcode drive slowly grew, but the video never actually started playing. When I finally gave up and cancelled it, it had only transcoded about 250MB of a 4GB video file. I have been using the same video file for all my transcode testing, and when I had it going to the cache drive (or to RAM) it started playback after about five to ten seconds. I change the local streaming quality in my Plex media player (this is all inside my LAN, for testing) to around 480p so I can get an idea of the processor utilization for a lowish-quality stream and see how much disk I/O it creates. The end goal is for me to figure out how many simultaneous streams my server can transcode without noticeable performance degradation.
  15. This may or may not be the right place for this, as I am unsure if it is specific to the Docker container or not, but I figured this would be a good place to start. I installed a hard drive I had lying around (a 320GB disk formatted as ext4) to act as a dedicated disk for transcoding, so as not to beat on my cache drive constantly. I didn't want to add the disk to the array or add it as a cache disk, so I am utilizing dlandon's Unassigned Devices plugin to mount the drive on boot. I set up my Docker container to map /transcode to /mnt/disks/transcode/ and made sure that the permissions were set to read/write.

      I can see that the transcode is working and being written to the disk, but it is unbelievably slow; I think the transcode has written about 40MB to the disk in two to three minutes. From what I can tell this isn't an issue with the disk itself, as I test-copied 1.5GB of video files and it completed in about a minute, averaging 90MB/s. Has anyone ever tried to set up the Plex Docker container like this before who might have some pointers for me? Or at least clarify whether this is an issue specific to me and my setup, or if others can reproduce the same results.
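      If anyone wants to compare raw write throughput on their own transcode disk, a quick (if crude) check that skips the page cache is sketched below. I just copied files for my test, but something like this is more controlled; the mount point is from my setup above:

         # Crude sequential write test that bypasses the page cache (oflag=direct)
         dd if=/dev/zero of=/mnt/disks/transcode/testfile bs=1M count=1024 oflag=direct
         rm /mnt/disks/transcode/testfile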
  16. "Well, it turns out the icons are there and display correctly when viewing Tower/Main, but not under Tower/Main/UnassignedDevices. I tried to view the page in both Firefox and Chrome after clearing the browser cache."

      "That is quite curious. You put Tower/Main/UnassignedDevices in the browser URL. I did that and reproduced what you found. I didn't know that even worked. Let me take a look."

      Well, I am still new to unRAID 6 and the whole plugin/Docker interface, and I found that by simply clicking the plugin image on the installed plugins page (at least for some plugins, like Swap File) I could quickly get to the settings/interface for that plugin. So that is what I did with this plugin, thinking that was the quickest way to get to the interface.
  17. Does your Dropbox docker support selective sync (i.e., can I pick which folders I want to be synchronized)? Do any of the current Dropbox dockers support this? I know it is possible in the official application, so if not, I might just make a Linux VM and use the official Dropbox application for my needs.
  18. Well it turns out the icons are there and display correctly when viewing Tower/Main but not under Tower/Main/UnassignedDevices. I tried to view the page in both Firefox and Chrome after clearing the browser cache.
  19. I just installed your plugin and it seems to be working fine (I haven't tried to mount anything yet, as I am not home), except I seem to be missing the log and script icons in the webgui. I tried removing the plugin and reinstalling from the repo, to no avail. Looking in the flash\config\plugins\unassigned.devices\ directory, there is no icons folder (which is where the plugin appears to be looking for these icons). Here is a link to my diagnostics dump: https://dl.dropboxusercontent.com/u/70586667/void-diagnostics-20160210-1707.zip Let me know if there is further information you would like me to provide.
  20. So after spending all day on the forums looking around, I have discovered the source of my issues. It turned out to be that I didn't have enough memory installed. That's why my diagnostics utility wasn't working and why this plugin wouldn't install.
  21. As far as I know the plugin is dated 2015-12-01, and I just upgraded to the new version of unRAID last week. It is possible this is part of a larger issue, as the Diagnostics tool doesn't seem to be working for me either, so my upgrade might have had some hiccups. When I try to run the tool it gathers all the information it needs (I assume), but when it redirects me to download the zip file I get a 404 Not Found error.
  22. "On the Dashboard under System Status, what do you have for flash : log : docker?"

      It says "not available" for docker; see attached screenshot.

      dlandon: I don't seem to have the powerdown folder in the plugins directory at all?

         root@VOID:/boot/config/plugins# ls
         dynamix/  dynamix.apcupsd/  dynamix.kvm.manager/  swapfile/  swapfile.plg*

      This is all I have in there right now. I had a previous version of the powerdown script installed in v4 and v5; however, I followed the upgrade instructions and deleted the plugin folder and all the other folders it listed to remove, so I am not sure what the issue might be. The Swap plugin installed fine, as you can see.
  23. "Can you ping github.com from your server?"

      I am able to ping GitHub without issue, and Google DNS is distributed by DHCP to all the machines at my house. The plugin downloaded to my server fine from what I can tell (I apologize, I was in a hurry this morning and didn't copy that part of the log). Should I try to download the plugin manually and install from a file, maybe? This is the full log from me trying to install the plugin:

         plugin: installing: https://github.com/dlandon/powerdown/raw/master/powerdown-x86_64.plg
         plugin: downloading https://github.com/dlandon/powerdown/raw/master/powerdown-x86_64.plg
         plugin: downloading: https://github.com/dlandon/powerdown/raw/master/powerdown-x86_64.plg ... done
         Warning: simplexml_load_file(): /tmp/plugins/powerdown-x86_64.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): /tmp/plugins/powerdown-x86_64.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         plugin: xml parse error
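      The parse errors suggest the downloaded file was empty or not XML (e.g. a redirect page) rather than the real .plg, so a manual fetch seems like a reasonable next step. A sketch of what I mean, assuming the same URL and that the plugin command accepts a local path:

         # Fetch the .plg by hand so you can inspect what actually came down
         wget -O /boot/config/plugins/powerdown-x86_64.plg https://github.com/dlandon/powerdown/raw/master/powerdown-x86_64.plg

         # If the first line is an XML declaration, install from the local copy
         head -1 /boot/config/plugins/powerdown-x86_64.plg
         plugin install /boot/config/plugins/powerdown-x86_64.plg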
  24. Just tried to install this through the plugin manager and received these errors:

         Warning: simplexml_load_file(): /tmp/plugins/powerdown-x86_64.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): /tmp/plugins/powerdown-x86_64.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
         plugin: xml parse error

      I am using the latest version of unRAID, v6.1.6.