Posts posted by dukiethecorgi

  1. Hey all-

     

    Got a new 4TB disk to replace an aging 2TB drive in the array. I haven't replaced a disk before, so I'm unsure about the best way to proceed. Is it recommended to use the unBalance plugin to move the data off the disk first, or should I just swap it and let the parity disk rebuild it? If I go that route, I assume I should manually start a parity check just before the replacement?

     

    Also, I'm not sure I understand the purpose of the preclear plugin - do I need to preclear the disk before installing it? It's a brand-new disk, never used.

     

    I appreciate any guidance you can offer

  2. 58 minutes ago, ysu said:

    Nope, PIA alone - fine.  Torrent w/o PIA - fine.  Only the two together is the problem.

     

    I tested this out earlier: PIA on, then a speedtest and a download of a large-ish file, e.g. from S3. Near-perfect - pings are still single digit, and the speed drop is negligible.

     

    But thanks for your suggestion.

    Try different combinations of settings in your .ovpn config file. PIA allows UDP on ports 53, 1194, 1197, 1198, 8080, and 9201, and TCP on ports 80, 110, 443, 501, and 502. I've found a huge difference in throughput between the different combinations. Also disable IPv6 if you can - that made a big difference for me. I was able to get a VPN speed of about 90% of my direct speed.
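
    As a sketch, the relevant .ovpn lines might look like this (the endpoint hostname here is only illustrative - substitute your own PIA region - and the pull-filter directives require OpenVPN 2.4 or later):

```
proto udp
remote us-east.privateinternetaccess.com 1197
# Drop the pushed IPv6 routes/addresses so no traffic routes outside the tunnel
pull-filter ignore "route-ipv6"
pull-filter ignore "ifconfig-ipv6"
```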

  3. Just now, jonathanm said:

    I just installed it myself quite easily, don't yet know if it's going to make a difference with PIA though.

     

    I don't think you can install it with the webgui, but with the remote GTKUI client I just installed the python 2.7 version and it seems to work.

    Good to know. I couldn't get it to install, but I'll try again via the GTK client.

     

    I found that it does make a difference - after a bit of tweaking I was able to get downloads at 85% of my line speed.

  4. On 3/27/2017 at 7:54 PM, Mistershiverz said:

    I am having issues with Ombi using large amounts of memory. At the moment it is using 4GB of RAM - is this to be expected? By the end of the day it will have used all 8GB of the RAM in my system.

     

    Take a look at the Ombi log and see if you have a lot of events that contain "ProviderID". If so, stop the container, go to Plex and delete any Ombi tokens under Server settings, start the container, and then go to Ombi settings and request a new token.

  5. On 3/20/2017 at 6:42 AM, binhex said:

     

    What VPN provider are you with? If it's not PIA then you will need to set up a port forward manually; if it is PIA then you need to ensure the endpoint you're connected to allows port forwarding. Also check that the paths are correctly defined in the Deluge webui - you should be pointing at /data for incomplete/completed.

     

    I'm not sure I understand. I use an up script with OpenVPN to query PIA for the forwarded port number and then push that to Deluge. Using this docker, is it no longer necessary to do that?

  6. Let me apologize in advance for the moronic questions, but I'm an absolute beginner when it comes to Linux/Docker/etc....

     

    I telnet into unRAID and use 'docker exec -it letsencrypt /bin/bash' to get to the command line. When I test with 'sendmail [email protected] < /tmp/testmail.txt' I get the response 'can't connect to remote host (127.0.0.1): Connection refused', which I'm guessing means that sendmail isn't configured. I look in /etc and can't find anything - no mail or sendmail folder, no sendmail.conf, nothing at all. Using find to search the entire image, I still don't see anything.

     

    I'm completely lost - what am I doing wrong? I'd appreciate any advice you can give.

  7. Thanks RobJ

     

    Changed the mover to once a day, removed some plugins, stopped caching files, and only use the cache drive for docker storage

     

    After all that, it still grinds to a halt every 3-4 days. I'm seriously considering pulling all the data off and switching to another OS; this system just isn't usable as it exists.

     

    Is there any progress on finding the root cause of this problem? The forum has quite a few posts from people seeing call traces and an unresponsive GUI, so surely I'm not the only one with this problem. Could I roll back to an earlier version that doesn't have this issue?

  8. First, rebooting the server seemed to fix the problem for the time being.

     

    I did disable the cache dirs plugin, which had a noticeable effect on CPU usage, though not so much on memory.

     

    Just prior to this happening, I added quite a few large (>12GB) video files to the server, and I noticed the cache disk nearly filled. Mover is set to run every couple of hours, so perhaps the mover had issues with the big files. I'm also reconsidering caching my media share, since those files are mostly read and rarely written.

     

    I appreciate the responses

  9. I'm having a lot of trouble getting certain containers to work.  I have unRAID on one machine, and Plex and Deluge on different machines.

     

    When I try to install containers like PlexPy, it asks for the location of the Plex log files. On the Plex machine I shared the log folder, and on unRAID I created an SMB share for that location. I install the container and point the log file location to the SMB share I created. This doesn't work - the PlexPy docker log fills with I/O errors. If I change the log file location to a dummy location on the unRAID server, the container does start up properly.

     

    How can I use containers when the data they need is located outside of the unRAID server?  Do I need to reconfigure all my other programs to use shares on unRAID instead of local locations for this to work?

  10. On 2/18/2017 at 3:09 PM, Yousty said:

    For anyone curious, I ended up going with the ASRock 990FX Extreme9 Mobo ($130 after rebate) and the AMD FX-8350 CPU ($149) which Passmark shows as having an average of 9,000 score so hopefully that should be good enough for two 1080p streams.

     

    I'm using an FX-8320E @ 4.5GHz in a standalone Plex server and it can do 3-4 1080p transcodes, so you should be fine.

  11. I'd imagine it is to do with whether Plex is just streaming it or transcoding it on the fly.  Plain streaming doesn't take much grunt.

     

    Sorry, I should have been clearer - it can transcode 3-4 streams, as long as they're H.264, not HEVC.

  12. Any reason to choose one or the other between this and Deluge?

     

    I like Deluge myself, but it has issues handling a lot of torrents (>400). The interface gets a bit sluggish, which I could live with, but the real problem is that communication with apps like Sonarr and Radarr becomes unreliable. rTorrent has been rock solid for me.

  13. I have a friend who does some content creation, and asked me if this was possible so he can convert the raw footage into a few different formats. My use case would be:

     

    1. A video file is placed in 1 of 3 user share folders.

    2. Depending on which folder I put it in, a different preset would be run in Handbrake and it would be re-encoded.

    3. The output file would go into a destination folder depending on the initial folder.

    4. A clean-up would be done to remove the original file. If it can't be re-encoded, or the encode fails, said file goes into a rejected folder so I can look at it later. If at all possible, I'd love some sort of log from the Handbrake failure; way back in my early DOS days, I'd just pipe the output to a 'log', and I'm assuming that's possible as well.

     

    I do something similar on a Windows network. I'm re-encoding some things in HEVC, so I use my SolidWorks workstation at night. I wrote a PowerShell script that:

     

    • Looks in a source folder and its subfolders and makes a list of files
    • Compares that to the destination folder and subfolders to create the list of files to be converted
    • Passes each filename to HandBrakeCLI along with the encode parameters, with output set to the destination folder
    • Checks that there is at least one hour before 'work time' starts
    • Continues to the next file

     

    Really not much to it. Once you get a Handbrake docker installed, I can't see why you couldn't do the same in bash.
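
    The steps above translate to bash fairly directly. A rough sketch, assuming made-up share paths, a hypothetical 07:00 'work time', and illustrative HandBrakeCLI encode settings - adjust all of those to your setup:

```shell
#!/bin/bash
# Paths are hypothetical placeholders - point these at your own shares
SRC=/mnt/user/encode-src
DST=/mnt/user/encode-dst

# Print the relative paths of files present under $1 but missing under $2
pending() {
  local src="$1" dst="$2" f
  ( cd "$src" 2>/dev/null && find . -type f -name '*.mkv' ) | while read -r f; do
    [ -e "$dst/$f" ] || echo "$f"
  done
}

pending "$SRC" "$DST" | while read -r f; do
  # Stop when the hypothetical 07:00 work time is less than an hour away
  [ "$(date +%H)" -eq 6 ] && break
  mkdir -p "$DST/$(dirname "$f")"
  # Illustrative encode settings; stdout/stderr piped to a log, DOS-style
  if HandBrakeCLI -i "$SRC/$f" -o "$DST/$f" -e x265 -q 22 \
       >>"$DST/handbrake.log" 2>&1; then
    rm -- "$SRC/$f"                      # clean up the original on success
  else
    mkdir -p "$DST/rejected/$(dirname "$f")"
    mv -- "$SRC/$f" "$DST/rejected/$f"   # park failures for later inspection
  fi
done
```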

     

     

  14. The last time I compared both, when Avatar just came out, Plex had a hard time with it, no issue whatsoever on JRiver.

     

    Has Plex client caught up ? Is it up to par now (Feb/Mar 2017) ?

     

    That's really strange, because my Plex server uses a $150 CPU/motherboard bundle I picked up at Microcenter and can easily handle 3-4 streams. Or is the Win 10 client you mentioned virtualized?

  15. You must let unRAID format any disk it will use for cache or array so if your existing disks have contents you will have to consider that.

     

    Yeah, I saw that. I have an external 6TB drive to aid in the process, and everything is backed up to Amazon Cloud just in case of disaster. It will only take 80-90 hours to restore  ;D

     

     
