Posts posted by nadbmal

  1. Very annoying that OpenVPN-AS is one of the most important/critical dockers I run, and it's the one that always randomly breaks; I don't notice until it's too late, when I'm not home and need to remote in.

     Changing the repo to linuxserver/openvpn-as:2.8.8-cbf850a0-Ubuntu18-ls122 according to the comment above fixed it anyway. Hopefully it'll stay fixed, since that version number/repo address shouldn't change.

  2. In 2018 or so, when I set up my Ryzen 1700X based Unraid server, I ran into an issue where the computer would randomly lock up after being idle. Somewhere I read that disabling C6 states by putting

    /usr/local/sbin/zenstates --c6-disable

    into my /config/go file would fix it, and sure enough it did. It's been rock solid since then; I've gotten over 100 days of uptime multiple times, basically only restarting for Unraid updates.

     

    But I believe this disables some of the CPU's power-saving features, so I'm wondering if it's OK to remove this line nowadays or if it's still a necessity. Ryzen on Linux was kinda new when I built this machine, but maybe it has matured by now?

     

    Would be nice to save a couple of watts... on the other hand, "if it ain't broke, don't fix it", etc.
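    For anyone finding this later, this is roughly what the whole go file looks like with the workaround in place. A sketch, not a definitive config: the emhttp line is Unraid's stock default, and it assumes the zenstates script (from the ZenStates-Linux project) has been placed in /usr/local/sbin as described above.

    ```shell
    #!/bin/bash
    # /boot/config/go -- Unraid runs this once at boot

    # Stock line: start the Unraid management utility
    /usr/local/sbin/emhttp &

    # Workaround for random idle lockups on first-gen Ryzen:
    # disable the C6 sleep state (assumes zenstates lives in /usr/local/sbin)
    /usr/local/sbin/zenstates --c6-disable
    ```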

  3. 2 hours ago, nick.longworth said:

    I cannot get this docker to run. I have left the template at its default for all of the IP settings and pointed the data to a local share. Looking at the log, I am getting an error on the last line of the log snippet below, related to nginx permissions or a missing file/dir. I am not sure what I should be looking at to resolve this.

     

    Executing hook /hooks/entrypoint-pre.d/19_doc_root_setup
    /var/www/html already exists.
    Setting document root to /var/www/html
    Executing hook /hooks/entrypoint-pre.d/20_perms_check.sh
    Running fast permissions check
    Fast permissions check successful, if you have any permissions error try running with -e FORCE_PERMS_CHECK = true
    Executing hook /hooks/entrypoint-pre.d/20_ssl_setup
    Not enabling SSL as neither key nor cert provided.
    Executing hook /hooks/supervisord-pre.d/10_config_check.sh
    checking Bind9 config
    Executing hook /hooks/supervisord-pre.d/20_test_files_setup
    Checking if /var/www/html is empty - Directory not empty.. don't touch content
    Executing hook /hooks/supervisord-pre.d/21_cleanup_log_files
    Cleaning up log files older than 3560 days
    Executing hook /hooks/supervisord-pre.d/50-sniproxy_config.sh
    Setting Upstream DNS to 1.1.1.1
    Executing hook /hooks/supervisord-pre.d/90-bind_config.sh
    Configuring Bind...
    Executing hook /hooks/supervisord-pre.d/90-nginx_config.sh
    Configuring Nginx...
    chown: cannot access '/var/log/nginx/*': No such file or directory
    ERROR: hook /hooks/supervisord-pre.d/90-nginx_config.sh} returned a non-zero exit status '0'

     

    2 hours ago, C_James said:

    I was about to send the same thing!

    Had the same problem. You can fix it for now by going to the "Nginx Log" folder and creating a file there (it errors because the folder is empty). 

    I.e. on your Unraid machine, open up a Terminal/SSH and cd to /mnt/user/appdata/lancache-bundle/log/nginx and create a file in there ("touch test" for example will create a blank file called test), then it works.
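    Spelled out as commands, that workaround looks something like the sketch below. The appdata path is the one from the post; it defaults here to a temp directory so the snippet is safe to run anywhere for illustration.

    ```shell
    # Workaround for the nginx chown error: the log directory must exist
    # and contain at least one file so the startup hook's glob matches.
    # On the real server: LOGDIR=/mnt/user/appdata/lancache-bundle/log/nginx
    LOGDIR="${LOGDIR:-$(mktemp -d)/nginx}"

    mkdir -p "$LOGDIR"                # make sure the directory exists at all
    touch "$LOGDIR/placeholder.log"   # any file works; "touch test" from the post does too
    ls -l "$LOGDIR"
    ```

    After that, restarting the container should get past the 90-nginx_config.sh hook.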

  4. I can't rollback to an older working version, I put 

    linuxserver/letsencrypt:0.34.1-ls25

    into the Repository field but I just get:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='letsencrypt' --net='bridge' --privileged=true -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'EMAIL'='(redacted)' -e 'URL'='(redacted).duckdns.org' -e 'ONLY_SUBDOMAINS'='false' -e 'DHLEVEL'='2048' -e 'VALIDATION'='http' -e 'DNSPLUGIN'='' -e 'SUBDOMAINS'='www,' -e 'PUID'='99' -e 'PGID'='100' -p '43666:80/tcp' -p '43667:443/tcp' -v '/mnt/user/appdata/letsencrypt':'/config':'rw' 'linuxserver/letsencrypt:0.34.1-ls25' 
    /usr/bin/docker: invalid reference format.
    See '/usr/bin/docker run --help'.
    
    The command failed.

    Anyone else or is it something on my end?

     

    EDIT: I fixed it by reverting to an even older version:

    linuxserver/letsencrypt:0.34.0-ls24

    I got the version number from here: https://github.com/linuxserver/docker-letsencrypt/releases

  5. 7 minutes ago, MowMdown said:

    @nadbmal that's because you're not mapping directly to /tmp, you're mapping to /tmp/plexram (don't do this). Plex cannot regenerate /plexram once it's deleted. That's why, in your situation, when you delete that directory it fails to transcode: it's looking for a folder that doesn't exist (Docker creates this path, not Plex).

     

    Just use /tmp as the transcode directory; Plex will autogenerate /Transcode/Sessions on its own every time a transcode is needed. You will never run into that issue doing it the proper way I described here:

     

    I don't wanna map /transcode straight to /tmp in case Plex sniffs around in there or ever does an "rm -rf *". Mapping it to its own subfolder is safer.

  6. 1 hour ago, MowMdown said:

    Not for me, it creates both. I tested it by deleting both while I was transcoding, then a second later both directories were auto-generated again and the session resumed.

     

    If you map only "/tmp" that is. 

    Here's how to reproduce on my end (it simulates a reboot):

     

    - Have guest /transcode mapped to host /tmp/plexram

    - Stop the docker

    - Remove the folder /tmp/plexram

    - Start the docker. /tmp/plexram will be created again.

    - Try playing something that requires a transcode. It will only keep on spinning.

    - Create the folder "Transcode" inside /tmp/plexram

    - Now it works

     

    I'm attaching a screen recording of this.

     

    I don't wanna map /transcode straight to /tmp in case Plex sniffs around in there or ever does an "rm -rf *". Mapping it to its own subfolder is safer.

     

     

  7. 2 hours ago, Spies said:

    After rebooting my server, I had to change the mapping back to the /mnt/user/transcode share and then back to /tmp/transcode in order to get anything to play.

     

    Any particular reasoning for this?

    I've run into that bug too. It fails because there isn't a folder called "Transcode" in /tmp/transcode after a reboot, since the RAM was wiped.

    So you just need to create that folder and it works. You could probably automate it by running "mkdir /tmp/transcode/Transcode" at boot. 
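    A sketch of that automation, as lines you could drop into /boot/config/go (the /tmp/transcode path matches the mapping discussed in this thread; the 777 permissions are an assumption so the container's unprivileged user can write there):

    ```shell
    # Recreate Plex's transcode directory on tmpfs after every reboot,
    # since anything under /tmp is wiped when the server restarts.
    mkdir -p /tmp/transcode/Transcode

    # Assumption: open up permissions so the Plex container user can write here
    chmod -R 777 /tmp/transcode
    ```

    With that in place, Plex should find its Transcode folder immediately after a reboot instead of spinning.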

  8. So I've been testing more, and I've come to the conclusion that the problem is the client one of my users is using: Chromecast. Whenever he plays something via the Chromecast, it buffers the whole movie into RAM almost immediately; he can watch for 15 mins or so, and then the whole movie is in RAM and the stream gets killed.

    I had him try the web player on his computer, and then it worked fine; it only buffered a couple of percent ahead of where he was, like it should.

     

    chrome_2018-12-18_19-52-18.thumb.png.5318ed35a09436f888d1b70226a179e3.png

     

    This picture pretty much sums it up. Look at the buffer bars.

     

    EDIT: I have narrowed it down even further, and come to the conclusion that the transcoding of subtitles is causing the problem, at least when transcoding to the ASS format.

    chromecast-bug.thumb.png.e97b87de635d5df50ac7cb62d741c07a.png

     

    EDIT 2: And you can probably narrow it down even further to it only being an issue with Chromecast users, because it transcodes to an MKV container. I tried the web player in Chrome, which also uses ASS subtitles but an MP4 container, and there I can see the transcoder is throttled.

  9. 1 hour ago, Hoopster said:

     

    Any chance you have some bad RAM?  Have you noticed any other potential RAM-related issues or is it just Plex transcoding in RAM? Perhaps you are seeing the issues because Plex is trying to move things around in RAM if some is bad.

     

    Even if it is the latter, a 24-hour memtest should eliminate or confirm RAM as the problem.

     

    Perhaps you are seeing the issues because Plex is trying to move things around in RAM if some is bad.

     

    This is a long shot, but there does not appear to be an issue with transcoding in RAM in general.

    I admit I haven't run memtest for 24 hours on the RAM, but I have done 2 hours and it was fine. I would prefer not to run memtest any more, because then I need to connect a GPU (the Ryzen 1700X has no integrated graphics...) and the server needs to be offline, and I get upset when I can't access my Plex on the iPad before I sleep...

    I've had previous experience with bad RAM, and in those cases Memtest found an error almost immediately.

    Also everything else seems fine (I got 45 days uptime on 6.6.1 before I updated to 6.6.5)...

     

    Is there some "lighter" version of memtest I could run without taking the server offline? Maybe some script that runs in /tmp and just moves big files around, verifying their integrity, to simulate Plex transcoding?
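    Something along those lines could look like the sketch below. It's only a crude smoke test of tmpfs (nowhere near real memtest coverage, since the kernel decides which physical pages get used), and the paths and sizes are made up for illustration:

    ```shell
    # Crude RAM smoke test: write pseudo-random data into /tmp (tmpfs on
    # this setup), copy it, and verify both copies hash identically.
    scratch=$(mktemp -d /tmp/ramcheck.XXXXXX)

    dd if=/dev/urandom of="$scratch/a" bs=1M count=64 status=none
    cp "$scratch/a" "$scratch/b"

    sum_a=$(md5sum "$scratch/a" | cut -d' ' -f1)
    sum_b=$(md5sum "$scratch/b" | cut -d' ' -f1)

    if [ "$sum_a" = "$sum_b" ]; then
      echo "pass: copies match"
    else
      echo "FAIL: checksum mismatch -- suspect RAM"
    fi

    rm -rf "$scratch"
    ```

    Looping this with bigger files (and for longer) would stress more of the RAM, but a clean run still proves much less than an overnight memtest.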

  10. On 11/30/2018 at 2:56 PM, IamSpartacus said:

     

    I've not noticed this issue and I have 8-10 people on my server every night.  Are you sure you have adequate free RAM to use for transcoding?

     

    18 hours ago, Hoopster said:

     

    As others have mentioned, I also have noticed no issues with RAM transcoding.  I only have, at most, 3-4 users accessing my server simultaneously and it does have 32GB RAM with a good amount free, so, perhaps I am not bumping up against any limits.

     

    I have 16 GB, typically with around 30% in use (looking at the dashboard page in Unraid).

    And all I know is that whenever I map /transcode to /tmp, a lot of stuff just breaks. I see users restarting movies constantly (I'm guessing they're getting a transcoder error). If I reboot, Plex won't start transcoding anything immediately, because it wants to remove some leftover session files; since they were in /tmp they aren't there anymore, so it complains about that (not really sure how it gets resolved, it just starts working eventually):

     

    Dec 01, 2018 16:55:25.314 [0x153c166d5800] ERROR - Transcoder: Error cleaning old transcode sessions: boost::filesystem::directory_iterator::construct: No such file or directory: "/transcode/Transcode/Sessions"

     

    One of the most absurd things I saw was someone direct-streaming a movie; it looked like it cached the whole video in RAM and then quit the stream (I saw the Tautulli buffer bar fill up really fast and then disappear, then reappear when he restarted the movie...)

     

    Then whenever I just point it back at the SSD everything works wonderfully...

     

  11. With Airvideo, has anyone observed a memory leak when it's generating thumbnails?
    I'm using AirvideoHD for iOS and tvOS, and whenever I go into a folder with videos that haven't been thumbnailed/cached before, I see the memory usage of the docker jump up and never go down again.

    The workaround is to keep track of memory usage, browse through all your share folders, and restart the docker whenever memory gets uncomfortably high, then repeat. Once all thumbnails are made you can leave the docker running... until you add a bunch of videos again.

  12. 6 hours ago, Maor said:

    I am building with a 1700X and an ASUS Prime X370-Pro. Can you recommend any tested ECC RAM?

    I know that I need two sticks, as some users report ECC won't work with a single DIMM. I plan on 8-16 GB of RAM.

    I only use one stick of RAM, and everything I've checked reports ECC as enabled (both Unraid and Memtest).

    Kingston KVR21E15D8/8

     

     

    ram.PNG

  13. 42 minutes ago, MacGeekPaul said:

     

     

    Thanks. I also ran into the C6-states bug fairly quickly in my testing (I've been running the trial version on it), but once I disabled it in the BIOS it seems to have gone away. I'm gonna add that zenstates line too, just to be sure.

     

    I think I'll do the migration soon then.

  14. So is Unraid + Ryzen stable nowadays? I'm thinking about moving over from my old box.

     

    I've put together a rig with a 1700X CPU, 8 GB of ECC RAM, and an Aorus (Gigabyte) AX370-Gaming K7 motherboard. I went with that motherboard because it has an M.2 slot and 8 SATA ports and supports ECC RAM.


    My current Unraid/Plex server is an i3-6100, 4 GB of regular RAM, and a Gigabyte H270N-WIFI, which is getting a bit slow for 4K transcodes in Plex (the 1700X handles them much better, obviously).

     

    I currently use 6x 4 TB disks + 1x 250 GB NVMe as cache. Since this new motherboard has 8 SATA ports, I will eventually add more disks.

     

    So should I move over today or should I hold off for a while? Are there any gotchas with a Ryzen based Unraid server?

  15. 9 minutes ago, jonathanm said:

    Since all of your drive bays are full, you don't have the option of adding parity 1 back as a larger drive. You are stuck doing the parity swap procedure that you described. The new larger drive would be assigned to parity 2, and the current parity 2 drive would be assigned in place of the failed drive. The system will then copy the parity drive bits to the new drive; once that is complete, it will use all your drives to rebuild the data from the failed drive onto the old parity 2 drive. This is described in the docs: https://lime-technology.com/wiki/The_parity_swap_procedure

     

    You have painted yourself into a corner by not leaving an empty bay; I can't think of a way to get the parity type switched to parity 1 without rebuilding it and going without protection for that interval. Going unprotected is not the end of the world, if parity 2 vs 1 really bothers you. As long as all your drives are verified healthy with long SMART tests and a clean parity check, I see no reason not to rebuild, other than the inherent risk of messing with a currently working system.

     

    I've been meaning to get a new motherboard and bigger case so this is just more motivation to do it. Someday.

  16. 1 minute ago, jonathanm said:

    Not cosmetic, but won't cause issues. Parity 1 and 2 use different computations, so they are not interchangeable. If you really want it to be parity 1, you will need to set a new config and change its assignment. You will be without parity protection until it is successfully rewritten as parity 1, so I wouldn't bother changing it.

     

    Next time you run out of space, you could assign the new (probably larger) drive as parity 1, let it build, check it, then reassign your current parity 2 as a data drive.

     

    Ah ok, kinda scary that it's not just cosmetic, but good that it shouldn't matter (apart from my OCD going slightly insane).

     

    When I run out of space, can I replace any of the drives (all 6 ports/bays are full) or do I need to replace a specific one?

    On that note, I assume that if one of the current disks were to go bad, I couldn't replace the failed drive with a bigger one, make that the new parity, and then make the old parity a data disk?

  17. On 2017-12-27 at 12:47 PM, johnnie.black said:

    Yes, there's no rebuild.

     

    If you want, the disk will be cleared before it can be used.

     

    So now I've done it and changed one of the parity drives to a data one. I decided to keep "Parity 2" as a parity disk, but even though it's the only parity disk now, it's still named "Parity 2".

    Is this just cosmetic, or can it cause problems later on? (I'm pretty sure it's cosmetic only, but I just wanna be sure...)

    If it's cosmetic, is it possible to fix easily?

     

    (Disk 5 is clearing)

    lookslikethis.PNG

  18. I currently have two parity disks, but for a few reasons (unimportant data, only 6 drive bays, etc.) I am considering eventually going down to just one when I start running low on space (not something I would do today, but maybe next year).

     

    Is it possible to do? Is it as easy as stopping the array, unassigning one of the parity disks, starting (maybe rebuilding?), then stopping again and assigning the disk as a data disk?

  19. 7 minutes ago, johnnie.black said:

    That's normal with v6.3.5; NVMe SMART support is better on v6.4.

     

    Those NVMe devices use a Marvell controller, probably not the best option for unRAID given the known issues with Marvell controllers.

    Yeah I just don't trust it anymore.

    Since it has a 5-year warranty, I'm gonna see if I can return it, get my money back, and go for a Samsung 960.

  20. I have the same exact disk, attached straight to the Gigabyte GA-H270N-WIFI motherboard, and this morning when I woke up I had a notification saying that my Cache disk was missing. It had a green ball next to it on the Main page, but when I went into its page it said that the drive was "spun down", and when I hit the Spin Up button, nothing happened.

     

    Then I did a reboot and nothing happened.

     

    Then I did a power down, and then power up again, and now it's back.

     

    But seeing as you had the exact same issue with this exact drive, I'm bailing on it. Gonna run the mover and hope it gets my Plex install out of there, so I don't have to refresh all the metadata, etc.

     

    I ran diagnostics, but all the SMART report says for it is: "Read NVMe SMART/Health Information failed: NVMe Status 0x4002"

    tower-diagnostics-20171205-1024.zip
