LSL1337


Posts posted by LSL1337

  1. What's the usual memory usage of this docker image for you guys?

    For me it's over 1 GB, even when I'm not running anything.

    I seed a few shows for a few days, then delete them, but the docker is still at well over 1 GB of usage (per the docker stats command).

    Is this normal?

    Shouldn't the usage go down after a while?

     

    thanks!

  2. This morning I found the following lines in the log:

     

    Quote

    Apr 23 02:30:20 LSL-NAS kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000038
    Apr 23 02:30:20 LSL-NAS kernel: IP: tcp_push+0x4e/0xee
    Apr 23 02:30:20 LSL-NAS kernel: PGD 0 P4D 0 
    Apr 23 02:30:20 LSL-NAS kernel: Oops: 0002 [#1] PREEMPT SMP PTI
    Apr 23 02:30:20 LSL-NAS kernel: Modules linked in: macvlan xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs md_mod nct6775 hwmon_vid bonding x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd glue_helper cryptd i2c_i801 i2c_core intel_cstate alx intel_uncore ahci libahci intel_rapl_perf video mdio backlight button acpi_cpufreq
    Apr 23 02:30:20 LSL-NAS kernel: CPU: 3 PID: 12092 Comm: deluged Not tainted 4.14.26-unRAID #1
    Apr 23 02:30:20 LSL-NAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H87M, BIOS P2.60 01/08/2016
    Apr 23 02:30:20 LSL-NAS kernel: task: ffff8801af568e00 task.stack: ffffc9000140c000
    Apr 23 02:30:20 LSL-NAS kernel: RIP: 0010:tcp_push+0x4e/0xee
    Apr 23 02:30:20 LSL-NAS kernel: RSP: 0018:ffffc9000140fc70 EFLAGS: 00010246
    Apr 23 02:30:20 LSL-NAS kernel: RAX: 0000000000000000 RBX: 00000000000005a0 RCX: 0000000000000000
    Apr 23 02:30:20 LSL-NAS kernel: RDX: 0000000000000000 RSI: 0000000000004040 RDI: ffff8800b47e0000
    Apr 23 02:30:20 LSL-NAS kernel: RBP: 0000000000000000 R08: 00000000000005a0 R09: ffffffff8151db00
    Apr 23 02:30:20 LSL-NAS kernel: R10: ffff8800b47e0158 R11: 0000000000000000 R12: ffff8800b47e0000
    Apr 23 02:30:20 LSL-NAS kernel: R13: 0000000000000000 R14: ffff88006a620200 R15: 00000000ffffffe0
    Apr 23 02:30:20 LSL-NAS kernel: FS:  000014f523d5bae8(0000) GS:ffff88021e380000(0000) knlGS:0000000000000000
    Apr 23 02:30:20 LSL-NAS kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Apr 23 02:30:20 LSL-NAS kernel: CR2: 0000000000000038 CR3: 00000001bbd00003 CR4: 00000000001606e0
    Apr 23 02:30:20 LSL-NAS kernel: Call Trace:
    Apr 23 02:30:20 LSL-NAS kernel: tcp_sendmsg_locked+0xa53/0xbac
    Apr 23 02:30:20 LSL-NAS kernel: tcp_sendmsg+0x23/0x35
    Apr 23 02:30:20 LSL-NAS kernel: sock_sendmsg+0x14/0x1e
    Apr 23 02:30:20 LSL-NAS kernel: ___sys_sendmsg+0x1ab/0x229
    Apr 23 02:30:20 LSL-NAS kernel: ? sock_poll+0x6d/0x76
    Apr 23 02:30:20 LSL-NAS kernel: ? ep_send_events_proc+0xaa/0x163
    Apr 23 02:30:20 LSL-NAS kernel: ? seccomp_run_filters+0xdc/0x106
    Apr 23 02:30:20 LSL-NAS kernel: ? ep_scan_ready_list.constprop.2+0x17a/0x19a
    Apr 23 02:30:20 LSL-NAS kernel: ? __seccomp_filter+0x26/0x1c5
    Apr 23 02:30:20 LSL-NAS kernel: ? __sys_sendmsg+0x3c/0x5d
    Apr 23 02:30:20 LSL-NAS kernel: __sys_sendmsg+0x3c/0x5d
    Apr 23 02:30:20 LSL-NAS kernel: do_syscall_64+0xfe/0x107
    Apr 23 02:30:20 LSL-NAS kernel: entry_SYSCALL_64_after_hwframe+0x3d/0xa2
    Apr 23 02:30:20 LSL-NAS kernel: RIP: 0033:0x14f523d02f33
    Apr 23 02:30:20 LSL-NAS kernel: RSP: 002b:000014f523d5ac28 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
    Apr 23 02:30:20 LSL-NAS kernel: RAX: ffffffffffffffda RBX: 000014f523d5bae8 RCX: 000014f523d02f33
    Apr 23 02:30:20 LSL-NAS kernel: RDX: 0000000000004000 RSI: 000014f523d5ac78 RDI: 0000000000000043
    Apr 23 02:30:20 LSL-NAS kernel: RBP: 000000000000002e R08: 0000000000000000 R09: 0000000000000000
    Apr 23 02:30:20 LSL-NAS kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000
    Apr 23 02:30:20 LSL-NAS kernel: R13: 000000000000003c R14: 000014f523d5b1f8 R15: 0000000000000000
    Apr 23 02:30:20 LSL-NAS kernel: Code: d0 75 02 31 c0 41 89 f3 41 81 e3 00 80 00 00 74 1a 44 8b 8f 68 05 00 00 41 d1 e9 44 2b 8f 6c 06 00 00 44 03 8f 74 06 00 00 79 10 <80> 48 38 08 8b 8f 6c 06 00 00 89 8f 74 06 00 00 40 80 e6 01 74 
    Apr 23 02:30:20 LSL-NAS kernel: RIP: tcp_push+0x4e/0xee RSP: ffffc9000140fc70
    Apr 23 02:30:20 LSL-NAS kernel: CR2: 0000000000000038

     

    Deluge wasn't responding this morning (web UI), so I restarted the docker, and it was fine. I'm not sure the two are related, but deluged is mentioned in the log above.

     

    Any ideas? I haven't seen this before, and I've never had a problem with the deluge docker not responding.

    I updated to 6.5.0 late last week. I was on 6.4.x before that.

     

    thank you!

  3. What am I doing wrong?

     

    I have the official Plex docker.

    I stopped it; I want to try creating a separate one with this image.

    I stopped the main one so it wouldn't conflict with this one in host mode.

     

    When I start this docker and go to the local IP:32400, I log in, and there is no server setup screen, only the normal Plex web interface, which is looking for my offline server.

    WTF?

    I started a new server, so why can't I set it up? (I have Plex Pass, so it should be possible to have multiple servers under the same plex.tv account, but that doesn't matter, since I can't claim the server with a brand-new Plex account either...)

  4. OK, I was the stupid one, but this plug-in really ****** me over tonight.

    It listed a few old dockers which I hadn't used for a while:

     

    binhex-sonarr 

    etc.

     

    But it was pointing to the linuxserver-sonarr folder (I guess the old binhex docker used the same one I use currently).

    Now my 'prod' Sonarr is GONE.

     

    Of course it's my fault, but why did it list a folder if it belongs to an in-use application?

    :SIGH:

    • Like 1
  5. I have a similar problem.

     

    After I edited dockers (maybe I messed something up and they didn't start), they went missing and 'became' orphan images.

    I've been using unRAID for over a year now; this never happened on 6.3.x.

    A few days ago I updated to 6.4.1, and then it started happening (could be unrelated).

     

    What could cause this?

     

    What is an orphan image anyway? Why can't I just edit the XML? It wasn't from a template.

  6. Yeah, it would be great if implemented.

    I don't want to have to run a VM just to host a drive via iSCSI on a storage OS, lol.

     

    It's a two-year-old topic; it'll probably never happen...

     

    Yeah, it's annoying; in some cases I can't install to SMB drives in Windows...

  7. On 2017. 11. 18. at 2:20 PM, Squid said:

    Without thinking too much:

     

    
    #!/bin/bash
    hdparm -C /dev/sdb >> /mnt/user/sharename/status.txt  #change path to suit
    smartctl -a /dev/sdb | grep Temperature >> /mnt/user/sharename/status.txt
    #repeat for more disks
    
    sensors | grep Fan >> /mnt/user/sharename/status.txt
    

    And running at whatever schedule you choose.

    Thank you very much!
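Building on Squid's snippet, here's a sketch of how I'd timestamp each run so entries in one file stay readable. The OUT path and disk names are placeholders, and the hdparm/smartctl/sensors calls are guarded so the script degrades to a no-op where those tools are missing:

```shell
#!/bin/bash
# Append one timestamped status block per run to a single log file.
OUT=${OUT:-/tmp/status.txt}   # change to /mnt/user/sharename/status.txt on unRAID
echo "=== $(date '+%Y-%m-%d %H:%M:%S') ===" >> "$OUT"
for disk in /dev/sdb /dev/sdc; do                       # list your disks here
  command -v hdparm   >/dev/null && hdparm -C "$disk" >> "$OUT"
  command -v smartctl >/dev/null && smartctl -a "$disk" | grep Temperature >> "$OUT"
done
command -v sensors >/dev/null && sensors | grep Fan >> "$OUT"
```

Scheduled every 15 or 60 minutes (e.g. with the User Scripts plugin or cron), this covers the request in the next post.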

  8. I'm a Linux noob, but is it possible to write a script which logs my disk status (hdparm -C /dev/sdX — spun up/spun down) every 15 or 60 minutes and writes the results to a file? (Maybe extend it with HDD temps as well, maybe even fan RPM?)

     

    Preferably all the results (from every run) in the same file, or maybe separate disks to separate files?

    (I know I can see a spin-down event in the extended log, but it doesn't tell me when a disk was spun up or down, for example.)

     

    thank you very much!

  9. So, something weird happened; I'm not sure these things are connected.

    Yesterday I noticed there was a new release of Radarr, so I went and checked my docker under System/Updates, but it didn't show anything there; the page wouldn't load. I pressed Restart on the System page and still nothing showed up, but Radarr was working.

    Hmm, I thought nothing of it.

     

    Today I can see that the mono process has been hammering my CPU at 100% (on one thread) since last night, around the time I checked the update page and restarted Radarr (not the docker).

     

    So now I've restarted the docker itself, and the mono CPU load went back to normal. The Updates page still doesn't load, and after a Radarr restart the CPU is at 100% again.

     

    So, should I not use the Restart button in Radarr while running it in docker? Or is this a bug related to the Updates page not loading?

    Is the Updates page loading for you guys?

    I get an error in the logs: "Request Failed: Sequence contains more than one matching element"

     

    Even at 100% CPU, the app still works.

     

    thanks!

  10. 8 hours ago, tturkey said:

    Is there a way to upgrade to pre-release of va-api? Would it be worth it? I'm thirsty for hw transcode :P
    Also, first post, so thank you to everyone who posts here it has helped me a lot.

    Since everything is possible, I'm sure you can somehow put it into the Plex docker, but it's not straightforward like two commands over SSH.

    Check the Plex forums; I think some guys have tried it, but if you're a Linux beginner it's probably over our heads.

  11. 4 hours ago, akira1984 said:

    I've also heard something about needing better drivers that may help but does this only have an effect on intel gpus for transcoding? I have an Xenon 1225 v3 and am currently using that to hardware transcode but was curious if anyone knew if buying say a 1050ti to transcode being newer if that would produce better quality transcodes or if it would be the same due to that driver issue.

    I'm no expert, but as far as I know you wouldn't be better off; if anything, worse.

     

    On GeForce (consumer) cards there is a transcode limit of 2, so you can't do more than two transcodes at the same time.

     

    On these Intel GPUs you can do 5-7 1080p transcodes with minimal CPU usage, BUT at the moment on Linux they look like sh1t. Intel is working on new drivers (I think), and hopefully they will improve the quality to be on par with Windows, which is decent. There is no ETA; it could be weeks, months, or years.

    Two things I'm semi-sure about:

    - Intel Quick Sync is OK on Windows (anyone can try it, I guess, and decide for themselves), and there is no stream limit (NVIDIA will only do 2 for sure, even on a $600 1080 Ti)

    - Intel is working on a fix; there is a GitHub repo for it with plenty of recent commits (but will it fix this issue? IDK)

     

    So there is hope. I think this feature has been in beta since Christmas.

    You've used Plex for months/years without it.

    Basically it works right now; it's just suboptimal.

    Hopefully a little more waiting will improve it even further.

    On the bright side, you don't have to do anything; one update will just fix it (hopefully). Keep checking the changelogs.

     

     

    • Upvote 1
  12. 3 minutes ago, Endda said:

    I see Plex has enabled hardware acceleration in their latest build

     

     - https://support.plex.tv/hc/en-us/articles/115002178853-Using-Hardware-Accelerated-Streaming

     

    I just tested this out with the latest version but can't seem to get it to use the hardware. The article says some NAS devices could have issues with the feature; is that what is happening here?

    Yes, it works; you just need a few extra commands on unRAID and in the docker. Check back 2-3 pages. I'll write up a noob 3-line guide in a few days.
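Until then, a rough sketch of what those extra commands look like, from memory. This is an assumption, not a verified guide; double-check the exact steps on the earlier pages:

```shell
#!/bin/bash
# On the unRAID host (e.g. appended to /boot/config/go):
command -v modprobe >/dev/null && modprobe i915   # load the Intel iGPU driver
[ -e /dev/dri ] && chmod -R 777 /dev/dri          # crude; a chown to nobody:users is cleaner
# Then, in the Plex container template, pass the device through, e.g. as an
# extra docker parameter:
#   --device=/dev/dri:/dev/dri
echo "iGPU passthrough setup attempted"
```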

    • Like 1
  13.  

    1 minute ago, zin105 said:

    Ok I lost all interest in HW transcoding. I tried transcoding to 720p 4mbps and there's a lot of artifacting every once in a while.

    Don't give up just yet; it works much better on Windows.

     

    We just need updated (meaning better) Intel drivers for Linux, but that could take months, or even years...

    so eventually...

  14. 30 minutes ago, saarg said:

     

    It works without privileged mode if you change the permissions on /dev/dri to nobody:users. 

    When I tried it (PuTTY portable), it detected the space before and after 'nobody:users' as an extra character 'ú', so it became 'únobody:usersú' and the command did not run; I only managed the 777 command.

    Today I'll try it in the go script; maybe it works there.

    Do I still need the 777 line as well?

     

    What about i915? Isn't i965 better/newer, or is that a whole other thing?

     

    thanks!
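For the go-script attempt, I'm assuming it would just be these lines (and if the chown works, the 777 line should be redundant, so it's left commented here):

```shell
#!/bin/bash
# Candidate lines for /boot/config/go: hand the render device to nobody:users
# so the container can use it without privileged mode (per saarg's tip).
DRI=${DRI:-/dev/dri}          # overridable so the logic can be exercised elsewhere
if [ -e "$DRI" ]; then
  chown -R nobody:users "$DRI"
  # chmod -R 777 "$DRI"       # older workaround; likely unnecessary after the chown
fi
echo "dri permissions updated (if $DRI exists)"
```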

  15. 10 hours ago, aptalca said:

     

    What are you encoding from and to? 

     

    I just tested a stream and it looked pretty good. Guardians of the galaxy 2 10+mbps 1080p hw transcoded to 4mbps 720p on a G4600 (hd630) kaby lake. 

     

    I noticed that sometimes when I first start a stream, it looks like a 240p YouTube stream for about 30-60 sec and then it magically fixes itself. 

    From 1080p x264 (15 Mbit) to Plex 8 Mbit (I guess H.264).

     

    It starts at 240p for me as well, but after 2 seconds it's OK. However, every 10-20 seconds, when there is real movement, it goes back to YouTube-240p levels of macroblocking.

    I'm gonna try it out on Windows with an NVIDIA GPU to see whether it's the same.

    Is it possible that I have old drivers for the GPU or something? I'm on the latest unRAID (no beta).

    I'm on a 4370T (HD 4600).

     

    EDIT:

    OK, it seems like it's a Linux thing. (https://forums.plex.tv/discussion/290243/hardware-transcoding-linux-artefacts#latest)

    The Intel drivers are not very good on Linux, it seems.

    Maybe they'll update it. There is a 2.0 pre-release (from a few days ago).

    Hopefully it will be better and Plex will include it in the docker. (Right now 1.8.1 is in the container, and it overrides the OS version, as per my basic understanding after a few hours of digging around.)

  16. 2 hours ago, bonienl said:

     

    You can start and stop cache_dirs script using the following (the same command is used when starting and stopping the array):

    
    /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/rc.cachedirs start|stop

    This is never happening to my system, so it would be interesting to find out what is causing this.

    It happens around once a month. What should I look for when it happens again (for troubleshooting)?

    Is the basic system log enough?
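When it next happens, something like this (a made-up helper; the file paths are examples) could snapshot the state before I restart anything:

```shell
#!/bin/bash
# Capture cache_dirs process state plus the recent syslog for later diagnosis.
OUT=${OUT:-/tmp/cachedirs-debug.txt}
date >> "$OUT"
ps aux | grep -i '[c]ache_dirs' >> "$OUT"   # bracket trick excludes the grep itself
tail -n 200 /var/log/syslog >> "$OUT" 2>/dev/null
echo "--- snapshot done ---" >> "$OUT"
```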

  17. So HW transcode is OK, but when the encoder runs out of 'bandwidth', everything goes to shit?

    The only solution is brute force (more bitrate?), but for remote media streaming that's not viable, since the only reason to transcode is to save bandwidth.

     

    I can't believe people find this usable...

    The picture above is not even the worst I've seen in the past 5 minutes.

    And when there is a scene change... no B-frames or anything?

    Whatever, I'm getting off topic.

    Again, HUGE thanks for the Linux help!

  18. It works for me in privileged mode only, but the video quality is HORRIBLE. I mean, at an 8 Mbit 1080p transcode from a Blu-ray remux source,

    it's usually good, without much macroblocking,

    BUT every 5 seconds the quality drops to YouTube 360p or lower... for a few seconds, then it's back to passable (still a little below an 8 Mbit CPU transcode, but OK-ish),

    then back to YouTube 240p...

    https://www.dropbox.com/s/qqio89866qs5vf0/8mbit.png?dl=0

     

    Can this be a driver issue?

     

    I wouldn't recommend this feature to anybody in this state...

    I've read about GPU transcoding being slightly inferior to CPU transcoding, and I'm a bit of a video-quality nerd, but this is not usable IMHO.

     

    BTW, are there any downsides to running the docker privileged?

    Has anyone managed to get HW transcoding to work without privileged mode?

     

    thanks

  19. On 2017. 10. 01. at 1:23 AM, 1812 said:

     

    I stopped using it for that reason.

     

    So, is there any other plug-in for that functionality?

    Do most people not use cache_dirs?

    I like to keep my disks spun down; I VERY rarely use most of them.

     

    cheers
