dukiethecorgi
Posts posted by dukiethecorgi
-
The other possible solution is to look at your client and see WHY it's transcoding. Right now, it looks like your client is reporting to the server that it can't handle direct play.
-
Just wanted to update, after upgrading to 6.3.3, I have not had any issues with call traces and out of memory.
-
58 minutes ago, ysu said:
Nope, PIA alone - fine. Torrent w/o PIA - fine. Only the two together is the problem.
I have tested this out earlier. PIA on, then speedtest and downloading a large-ish file eg from S3. Near-perfect. Pings are still single digit, speed drop is negligible.
But thanks for your suggestion.
Try different combinations of settings in your .ovpn config file. PIA allows UDP on ports 53, 1194, 1197, 1198, 8080, and 9201, and TCP on ports 80, 110, 443, 501, and 502. I've found a huge difference in throughput between the different combinations. Also disable IPv6 if you can; that made a big difference. I was able to get VPN speed of about 90% of direct speed.
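For reference, these choices all live in the .ovpn file. A sketch of the relevant lines (the endpoint hostname and the proto/port pair are just examples to swap out while benchmarking, and the pull-filter lines for ignoring IPv6 need OpenVPN 2.4 or newer):

```
# Pick one proto/port pair from PIA's supported lists and test throughput
proto udp
remote nl.privateinternetaccess.com 1198
# Ignore the IPv6 config PIA pushes (OpenVPN 2.4+)
pull-filter ignore "ifconfig-ipv6"
pull-filter ignore "route-ipv6"
```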
-
Just now, jonathanm said:
I just installed it myself quite easily, don't yet know if it's going to make a difference with PIA though.
I don't think you can install it with the webgui, but with the remote GTKUI client I just installed the python 2.7 version and it seems to work.
Good to know, I couldn't get it to install but will try again via the GTK
I found that it does make a difference, after tweaking a bit I was able to get downloads at 85% of my line speed
-
Would it be possible in a future release to have the ltConfig plugin installed?
-
On 3/27/2017 at 4:42 AM, ysu said:
Hey guys, I've finally got it working - I've purchased PIA after all.
Got a question though:
How can I migrate my torrents from my win7 PC's deluge to the unraid server's delugeVPN?
http://forum.deluge-torrent.org/viewtopic.php?t=14635
It sort of worked for me - the torrents were all there, but I needed to do a force recheck for them to reattach properly
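For anyone else trying this, the gist of that thread can be sketched in shell. The temp dirs below are placeholders standing in for the old client's Deluge config dir and the DelugeVPN appdata dir, which will differ on your systems:

```shell
# Demo of the migration idea: Deluge keeps its torrent list in the 'state'
# folder (.torrent files plus fastresume data), so copying that folder
# across, then force-rechecking, carries the torrents over.
SRC=$(mktemp -d)    # stand-in for the old client's config dir
DST=$(mktemp -d)    # stand-in for e.g. /mnt/user/appdata/binhex-delugevpn
mkdir -p "$SRC/state" "$DST/state"
printf 'demo' > "$SRC/state/example.torrent"    # fake state file

cp -r "$SRC/state/." "$DST/state/"
ls "$DST/state"
```

On the real machines you'd stop both Deluge instances first, copy the folder, start DelugeVPN, and then force recheck as described above.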
-
On 3/27/2017 at 7:54 PM, Mistershiverz said:
I am having issues with Ombi using large amounts of memory. At the moment it is using 4GB of RAM; is this to be expected? By the end of the day it will have used all 8GB of the RAM in my system.
Take a look at the Ombi log and see if you have a lot of events that contain "ProviderID". If so, stop the container, go to Plex and under server settings delete any Ombi tokens, start the container, and then go to Ombi settings and request a new token.
-
Do these PERC H310 adapters work with drives >4TB?
-
Thanks! I'll try it tonight
-
On 3/20/2017 at 6:42 AM, binhex said:
what vpn provider are you with? if it's not PIA then you will need to set up a port forward manually; if it is PIA then you need to ensure the endpoint you're connected to allows port forwarding. also check the paths are correctly defined in the deluge webui, you should be pointing at /data for incomplete/completed
I'm not sure I understand. I use an up script on OpenVPN to query PIA for the port number, and then push that to Deluge. Using this docker, is it no longer necessary to do that?
-
Let me apologize in advance for the moronic questions, but I'm an absolute beginner when it comes to linux/dockers/etc ....
I telnet into unRAID and use 'docker exec -it letsencrypt /bin/bash' to get to the command line. When I try testing with 'sendmail [email protected] < /tmp/testmail.txt' I get the response 'can't connect to remote host (127.0.0.1): Connection refused', which I'm guessing means that sendmail isn't configured. I looked in /etc and can't find anything: no mail or sendmail folder, no sendmail.conf, nothing at all. Using find to search the entire image, I still don't see anything.
I'm completely lost, what am I doing wrong? Appreciative of any advice you could give.
-
Great container, replaces a VM I had running that had the same functions, but took a lot more resources.
In the VM I had running, I used sSMTP to send fail2ban emails. Would it be possible to add this in a future release? Or is there already a way of sending email in the container?
-
That's certainly worth a try. The docker containers are trivial to recreate. I'll report back in a few days.
-
Thanks RobJ
I changed the mover to run once a day, removed some plugins, stopped caching files, and now only use the cache drive for docker storage.
After all that, it still grinds to a halt every 3-4 days. I'm seriously considering pulling all the data off and switching to another OS; this system just isn't usable as it exists.
Is there any progress to finding the root cause of this problem? The forum has quite a few posts by people that are seeing call traces and unresponsive GUI, surely I'm not the only one seeing this problem. Could I roll back to an earlier version that doesn't have this issue?
-
First, rebooting the server seemed to fix the problem for the time being.
I did disable the cache dirs plugin, which did have a noticeable effect on CPU usage, not so much on memory.
Just prior to this happening, I added quite a few large (>12GB) video files to the server, and I noticed the cache disk nearly filled. Mover is set to run every couple of hours, so perhaps the mover had issues with the big files. I'm also reconsidering caching my media share, since those files are mostly read and rarely written.
I appreciate the responses
-
rebooting made everything work fine again, but I'm really curious why I had all those errors.
Can someone take a look and explain what was happening?
-
It's serving up files OK, but the web interface is very slow. Logs show call traces and out-of-memory errors.
Suggestions? Reboot and hope for the best?
Logs attached
-
I'm having a lot of trouble getting certain containers to work. I have unRAID on one machine, and Plex and Deluge on different machines.
When I try to install containers like PlexPy, it asks for the location of the Plex log files. On the Plex machine, I shared the log folder, and on unRAID I created a SMB share for that location. I install the container, and point the location of the log files to the SMB share I created. This doesn't work, as the PlexPy docker log is filled with I/O errors. If I change the log file location to a dummy location on the unRAID server, the container does start up properly.
How can I use containers when the data they need is located outside of the unRAID server? Do I need to reconfigure all my other programs to use shares on unRAID instead of local locations for this to work?
-
On 2/18/2017 at 3:09 PM, Yousty said:
For anyone curious, I ended up going with the ASRock 990FX Extreme9 Mobo ($130 after rebate) and the AMD FX-8350 CPU ($149) which Passmark shows as having an average of 9,000 score so hopefully that should be good enough for two 1080p streams.
I'm using an FX-8320E @ 4.5GHz in a standalone Plex server and it can do 3-4 1080p transcodes, so you should be fine.
-
I'd imagine it is to do with whether Plex is just streaming it or transcoding it on the fly. Plain streaming doesn't take much grunt.
Sorry, I should have been more clear: it can transcode 3-4 streams, as long as they're H.264, not HEVC.
-
Any reason to choose one or the other between this and Deluge?
I like Deluge myself, but it has issues handling a lot (>400) of torrents. The interface gets a bit sluggish, which I could live with, but the real problem is that communication with apps like Sonarr and Radarr becomes unreliable. Rtorrent has been rock solid for me
-
I have a friend who does some content creation, and asked me if this was possible so he can convert the raw footage into a few different formats. My use case would be:
1. A video file is placed in one of 3 user share folders.
2. Depending on which folder I put it in, a different preset would be run in HandBrake and the file would be re-encoded.
3. The output file would go into a destination folder depending on the initial folder.
4. A cleanup would be done to remove the original file. If it can't be re-encoded, or the encode fails, the file goes into a rejected folder so I can look at it later. If at all possible, I'd love some sort of log from the HandBrake failure. Way back in my early DOS days, I'd just pipe the output to a 'log'; I'm assuming that's possible as well.
I do something similar on a Windows network. I'm re-encoding some things in HEVC, so I'm using my SolidWorks workstation at night. I wrote a PowerShell script that:
- Looks in a source folder and its subfolders and makes a list of files
- Compares it to the destination folder and subfolders to create a list of files to be converted
- Passes each filename to HandBrakeCLI along with the encode parameters; output is set to the destination folder
- Checks that there is at least one hour before 'work time' starts
- Continues to the next file
Really not much to it. When you get a Handbrake docker installed I can't see why you can't do the same in bash
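The steps above can be sketched in bash along these lines. To keep the demo self-contained, temp dirs stand in for the real shares and a plain `cp` stands in for the actual HandBrakeCLI call; the paths, preset, and cutoff hour are all assumptions to adjust:

```shell
#!/bin/sh
SRC=$(mktemp -d); DST=$(mktemp -d)      # stand-ins for the source/destination shares
mkdir -p "$DST/rejected"
touch "$SRC/a.mkv" "$SRC/b.mkv"         # pretend source videos
touch "$DST/a.mkv"                      # pretend a.mkv was converted on a previous run

find "$SRC" -type f -name '*.mkv' | while read -r f; do
    out="$DST/$(basename "$f")"
    [ -e "$out" ] && continue           # already in the destination, skip it
    # The real script would check the clock here, e.g. stop an hour before work:
    #   [ "$(date +%H)" -ge 7 ] && break
    # And the real encode, with failures diverted to the rejected folder, e.g.:
    #   HandBrakeCLI -i "$f" -o "$out" --preset <preset> >>encode.log 2>&1
    cp "$f" "$out" || mv "$f" "$DST/rejected/"   # cp is the stand-in encode
done
ls "$DST"
```

Only b.mkv gets "encoded" in the demo, since a.mkv already exists in the destination.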
-
The last time I compared both, when Avatar just came out, Plex had a hard time with it, no issue whatsoever on JRiver.
Has the Plex client caught up? Is it up to par now (Feb/Mar 2017)?
That's really strange, because my Plex server uses a $150 CPU/motherboard bundle I picked up at Microcenter and can easily handle 3-4 streams. Or is the Win 10 client you mentioned virtualized?
-
You must let unRAID format any disk it will use for cache or array so if your existing disks have contents you will have to consider that.
Yeah, I saw that. I have an external 6TB drive to aid in the process, and everything is backed up to Amazon Cloud in case of disaster. It will only take 80-90 hours to restore.
Best practice for disk replacement
in General Support
Posted
Hey all-
Got a new 4TB disk to replace an aging 2TB drive in the array. I haven't replaced a disk yet, so I'm unsure about the best way to proceed. Is it recommended to use the unBalance plugin to move the data off the disk before proceeding? Or should I just swap it and let it rebuild from the parity disk? If I go that route, I assume I should manually start a parity check just before the replacement?
Also, I'm not sure I understand the purpose of the preclear plugin: do I need to preclear the disk before installing it? It's a brand new disk, never used.
I appreciate any guidance you can offer