Bydgoszcz


Posts posted by Bydgoszcz

  1. I need a little help with HomebridgeWithWebGUI, as I'm trying to install the plugin for Envisakit. - https://github.com/andylittle/envisakit

    The problem is that it requires extra setup outside of the plugin, with additional commands.

    This part executed successfully:

    # Clone the repository
    $ git clone 'https://github.com/andylittle/envisakit.git'
    $ cd envisakit

     

    However, this is where I get stuck:

    # Create virtual env and install packages
    $ virtualenv venv
    $ source venv/bin/activate
    $ pip install -r requirements.txt

     

    Is there a way to get virtualenv into the Docker container, or am I limited?
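    In case it helps anyone searching later, one possible workaround (a sketch only; it assumes the container is named HomebridgeWithWebGUI and that its image ships Python with pip) is to open a shell inside the running container and build the virtualenv there:

    ```shell
    # Open a shell in the running container (container name is an assumption).
    docker exec -it HomebridgeWithWebGUI /bin/bash

    # Inside the container: install virtualenv with pip if the image lacks it,
    # then follow the envisakit README steps as usual.
    pip install virtualenv
    cd /path/to/envisakit      # wherever the repo was cloned inside the container
    virtualenv venv
    source venv/bin/activate
    pip install -r requirements.txt
    ```

    Anything installed this way disappears when the container is recreated, so the clone would need to live on a mapped volume (e.g. under /config) to survive updates.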

  2. Is anyone else having memory issues? I just restarted Unifi and my memory usage went from 91% down to 54%. Is this normal with 8GB of RAM? If I hadn't restarted Unifi, I'm pretty sure it would have frozen my system after maxing out the memory.

     

    When I looked at top.txt, it shows mongod using 38.7% of memory.

     

    Is there a setting I should check, either in Unraid or Unifi?
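    If it is just mongod (the Unifi controller's database) ballooning, one blunt mitigation, purely as a sketch (the 2g value and the container name unifi are assumptions), is to cap the container's memory with Docker's --memory flag; on Unraid that can go in the container template's Extra Parameters field:

    ```shell
    # In the Unraid template, add this to "Extra Parameters" (value is a guess; tune it):
    --memory=2g

    # Or apply a cap to the already-running container without editing the template:
    docker update --memory=2g --memory-swap=2g unifi
    ```

    This doesn't fix a leak; it just makes the container, rather than the whole server, feel the pressure first.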

  3. I've been running into memory issues on my system, and I couldn't figure out what was eating it. Today I updated the Unifi docker and my memory usage dropped by 35%, so now I'm wondering if something within Unifi is slowly bloating until my system crashes.

     

    Any thoughts? What would I need to share for analysis?

  4. On 1/31/2019 at 8:54 PM, trurl said:

    Are any of your dockers mapping to an Unassigned Device? The path to an Unassigned Device is actual persistent storage only when the device is mounted. If it isn't mounted for some reason, that path would be in RAM.

    Here are all the run commands for each docker:

     

    Sonarr:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sonarr' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -p '8989:8989/tcp' -p '9897:9897/tcp' -v '/mnt/cache/Docker/Apps/sonarr':'/config':'rw' -v '/mnt/cache/Downloads/':'/downloads':'rw' -v '/mnt/user/TV/':'/media':'rw' 'binhex/arch-sonarr' 
    d6a182f7fe4a09d7e147ba2eaf78bd4c12be2376bb1b9b982a1b98f1353fc53f

     

    Plex:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='PlexMediaServer' --net='host' --privileged=true -e TZ="America/Denver" -e HOST_OS="Unraid" -v '/mnt/cache/Docker/Apps/plex/':'/config':'rw' -v '/mnt/user/TV/':'/tv':'rw' -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/Other Videos/':'/other videos':'rw' 'limetech/plex' 
    4c10403504ae4d33742a18cb2b018422d07c90f108aedce547cf98749ee51c9b

    Radarr:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='radarr' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '7878:7878/tcp' -v '/mnt/user/Downloads/':'/downloads':'rw' -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/appdata/radarr':'/config':'rw' 'linuxserver/radarr' 
    ea32d58852133c09b74a447392d08ddae350ad23eed9b03048cf382e743a99b0

    Unifi:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='unifi' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '3478:3478/udp' -p '10001:10001/udp' -p '8080:8080/tcp' -p '8081:8081/tcp' -p '8443:8443/tcp' -p '8843:8843/tcp' -p '8880:8880/tcp' -v '/mnt/user/appdata/unifi':'/config':'rw' 'linuxserver/unifi:unstable' 
    a44c3032530ced082f22dd0f0dccea98ed0c65fac13d07d8340c7914ce27b2b1

     

    From what I can tell everything is mapped, but maybe I'm missing something.

  5. Here you go.

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='nzbget' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '6789:6789/tcp' -v '/mnt/cache/Downloads/':'/downloads':'rw' -v '/mnt/user/appdata/nzbget':'/config':'rw' 'linuxserver/nzbget' 
    7c489af5b33cacd1be1d3326a1e683ddce73c2b68d2cd79a1a47bdc12ecc5593

     

    Sorry about not posting the diagnostics in a new post; I figured it would be better to edit the original post so others would see it rather than miss it in a reply.

  6. This has now happened a couple of times, and I'm looking for guidance to help me find where my memory leak is.

    I had to stop and restart the array to get everything working again, and I've updated all my dockers to the latest versions. I've done this before, though, and usually within a week it's back to hanging.

     

    Dockers running: NZBget, Sonarr, Radarr, Plex, Unifi, Homebridge

     

    This is the latest system log, where it seems things went haywire. Even the dashboard showed the CPU pinned at 100%:

    Jan 31 18:47:27 VAULT13 root: 
    Jan 31 18:47:27 VAULT13 root: /dev/sdi:
    Jan 31 18:47:27 VAULT13 root:  setting standby to 0 (off)
    Jan 31 18:47:40 VAULT13 emhttpd: Stopping services...
    Jan 31 18:47:58 VAULT13 root: Stopping docker_load
    Jan 31 18:47:58 VAULT13 emhttpd: shcmd (46279): /etc/rc.d/rc.docker stop
    Jan 31 18:48:47 VAULT13 kernel: Plex Media Serv invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
    Jan 31 18:48:47 VAULT13 kernel: Plex Media Serv cpuset=915332cbdbb2d1e301b8687a28ae08b3d98132bafc9f8584493fbd3a9f46ba96 mems_allowed=0
    Jan 31 18:48:47 VAULT13 kernel: CPU: 6 PID: 24328 Comm: Plex Media Serv Not tainted 4.18.20-unRAID #1
    Jan 31 18:48:47 VAULT13 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H97M Pro4, BIOS P2.30 03/07/2018
    Jan 31 18:48:47 VAULT13 kernel: Call Trace:
    Jan 31 18:48:47 VAULT13 kernel: dump_stack+0x5d/0x79
    Jan 31 18:48:47 VAULT13 kernel: dump_header+0x66/0x274
    Jan 31 18:48:47 VAULT13 kernel: ? do_try_to_free_pages+0x28f/0x2e6
    Jan 31 18:48:47 VAULT13 kernel: oom_kill_process+0x82/0x376
    Jan 31 18:48:47 VAULT13 kernel: ? oom_badness+0x19/0xf1
    Jan 31 18:48:47 VAULT13 kernel: out_of_memory+0x3b2/0x3ea
    Jan 31 18:48:47 VAULT13 kernel: __alloc_pages_nodemask+0x8d0/0xa8b
    Jan 31 18:48:47 VAULT13 kernel: ? __radix_tree_lookup+0x6a/0xa3
    Jan 31 18:48:47 VAULT13 kernel: filemap_fault+0x216/0x475
    Jan 31 18:48:47 VAULT13 kernel: __do_fault+0x18/0x4f
    Jan 31 18:48:47 VAULT13 kernel: __handle_mm_fault+0xcb4/0x10aa
    Jan 31 18:48:47 VAULT13 kernel: handle_mm_fault+0x159/0x1a8
    Jan 31 18:48:47 VAULT13 kernel: __do_page_fault+0x271/0x40b
    Jan 31 18:48:47 VAULT13 kernel: ? page_fault+0x8/0x30
    Jan 31 18:48:47 VAULT13 kernel: page_fault+0x1e/0x30
    Jan 31 18:48:47 VAULT13 kernel: RIP: 0033:0x7fcd9c9fe9b0
    Jan 31 18:48:47 VAULT13 kernel: Code: Bad RIP value.
    Jan 31 18:48:47 VAULT13 kernel: RSP: 002b:00007fcd9ab198a8 EFLAGS: 00010202
    Jan 31 18:48:47 VAULT13 kernel: RAX: 00007fcd88c4c400 RBX: 0000000001076a04 RCX: 00007fcd88c4c418
    Jan 31 18:48:47 VAULT13 kernel: RDX: 0000000000000008 RSI: 0000000001076a04 RDI: 00007fcd88c4c418
    Jan 31 18:48:47 VAULT13 kernel: RBP: 00007fcd88c4c400 R08: 0000000000000000 R09: 0000000000366be6
    Jan 31 18:48:47 VAULT13 kernel: R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000008
    Jan 31 18:48:47 VAULT13 kernel: R13: 00007ffe58dfaddf R14: 00007fcd9ab1a9c0 R15: 00007fcd9b850cb0
    Jan 31 18:48:47 VAULT13 kernel: Mem-Info:
    Jan 31 18:48:47 VAULT13 kernel: active_anon:1798785 inactive_anon:10639 isolated_anon:0
    Jan 31 18:48:47 VAULT13 kernel: active_file:4213 inactive_file:10710 isolated_file:0
    Jan 31 18:48:47 VAULT13 kernel: unevictable:0 dirty:128 writeback:0 unstable:0
    Jan 31 18:48:47 VAULT13 kernel: slab_reclaimable:13455 slab_unreclaimable:26872
    Jan 31 18:48:47 VAULT13 kernel: mapped:29776 shmem:133703 pagetables:7911 bounce:0
    Jan 31 18:48:47 VAULT13 kernel: free:42305 free_pcp:220 free_cma:0
    Jan 31 18:48:47 VAULT13 kernel: Node 0 active_anon:7195140kB inactive_anon:42556kB active_file:16852kB inactive_file:42840kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:119104kB dirty:512kB writeback:0kB shmem:534812kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 4059136kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
    Jan 31 18:48:47 VAULT13 kernel: Node 0 DMA free:15896kB min:276kB low:344kB high:412kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15896kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    Jan 31 18:48:47 VAULT13 kernel: lowmem_reserve[]: 0 2841 7561 7561
    Jan 31 18:48:47 VAULT13 kernel: Node 0 DMA32 free:69260kB min:50684kB low:63352kB high:76020kB active_anon:2944600kB inactive_anon:152kB active_file:1004kB inactive_file:8072kB unevictable:0kB writepending:0kB present:3086472kB managed:3070720kB mlocked:0kB kernel_stack:928kB pagetables:3896kB bounce:0kB free_pcp:20kB local_pcp:0kB free_cma:0kB
    Jan 31 18:48:47 VAULT13 kernel: lowmem_reserve[]: 0 0 4720 4720
    Jan 31 18:48:47 VAULT13 kernel: Node 0 Normal free:84064kB min:84204kB low:105252kB high:126300kB active_anon:4250540kB inactive_anon:42404kB active_file:14964kB inactive_file:34948kB unevictable:0kB writepending:512kB present:4962304kB managed:4833864kB mlocked:0kB kernel_stack:27648kB pagetables:27748kB bounce:0kB free_pcp:860kB local_pcp:0kB free_cma:0kB
    Jan 31 18:48:47 VAULT13 kernel: lowmem_reserve[]: 0 0 0 0
    Jan 31 18:48:47 VAULT13 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15896kB
    Jan 31 18:48:47 VAULT13 kernel: Node 0 DMA32: 385*4kB (UME) 414*8kB (UME) 301*16kB (UME) 114*32kB (UME) 217*64kB (UME) 122*128kB (UE) 50*256kB (UME) 12*512kB (UME) 8*1024kB (UME) 0*2048kB 0*4096kB = 69956kB
    Jan 31 18:48:47 VAULT13 kernel: Node 0 Normal: 160*4kB (UME) 316*8kB (UME) 378*16kB (UME) 31*32kB (UME) 67*64kB (UME) 50*128kB (UME) 66*256kB (UME) 54*512kB (ME) 19*1024kB (UME) 0*2048kB 0*4096kB = 84896kB
    Jan 31 18:48:47 VAULT13 kernel: 148637 total pagecache pages
    Jan 31 18:48:47 VAULT13 kernel: 0 pages in swap cache
    Jan 31 18:48:47 VAULT13 kernel: Swap cache stats: add 0, delete 0, find 0/0
    Jan 31 18:48:47 VAULT13 kernel: Free swap  = 0kB
    Jan 31 18:48:47 VAULT13 kernel: Total swap = 0kB
    Jan 31 18:48:47 VAULT13 kernel: 2016189 pages RAM
    Jan 31 18:48:47 VAULT13 kernel: 0 pages HighMem/MovableOnly
    Jan 31 18:48:47 VAULT13 kernel: 36069 pages reserved
    Jan 31 18:48:47 VAULT13 kernel: 0 pages cma reserved
    Jan 31 18:48:47 VAULT13 kernel: [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
    Jan 31 18:48:47 VAULT13 kernel: [ 1299]     0  1299     3750      884    65536        0         -1000 udevd
    Jan 31 18:48:47 VAULT13 kernel: [ 1575]     0  1575    56423      632    98304        0             0 rsyslogd
    Jan 31 18:48:47 VAULT13 kernel: [ 1670]     0  1670     2050     1577    57344        0             0 haveged
    Jan 31 18:48:47 VAULT13 kernel: [ 1705]    81  1705     3417      546    65536        0             0 dbus-daemon
    Jan 31 18:48:47 VAULT13 kernel: [ 1714]    32  1714     1339      542    53248        0             0 rpcbind
    Jan 31 18:48:47 VAULT13 kernel: [ 1719]    32  1719     2834     1458    61440        0             0 rpc.statd
    Jan 31 18:48:47 VAULT13 kernel: [ 1754]    44  1754    22898     1089    94208        0             0 ntpd
    Jan 31 18:48:47 VAULT13 kernel: [ 1761]     0  1761      614       22    45056        0             0 acpid
    Jan 31 18:48:47 VAULT13 kernel: [ 1775]     0  1775      631      446    49152        0             0 crond
    Jan 31 18:48:47 VAULT13 kernel: [ 1779]     0  1779      628      346    45056        0             0 atd
    Jan 31 18:48:47 VAULT13 kernel: [ 3680]     0  3680      638      427    45056        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3681]     0  3681      638      431    45056        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3682]     0  3682      638      445    45056        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3683]     0  3683      638      430    49152        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3684]     0  3684      638      439    45056        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3685]     0  3685      638      449    45056        0             0 agetty
    Jan 31 18:48:47 VAULT13 kernel: [ 3714]     0  3714     2273      660    53248        0         -1000 sshd
    Jan 31 18:48:47 VAULT13 kernel: [ 3721]     0  3721      629      393    45056        0             0 inetd
    Jan 31 18:48:47 VAULT13 kernel: [ 3798]    61  3798     5661      851    81920        0             0 avahi-daemon
    Jan 31 18:48:47 VAULT13 kernel: [ 3799]    61  3799     5531       65    77824        0             0 avahi-daemon
    Jan 31 18:48:47 VAULT13 kernel: [ 3808]     0  3808     1685       26    53248        0             0 avahi-dnsconfd
    Jan 31 18:48:47 VAULT13 kernel: [ 3821]     0  3821    71317      854    98304        0             0 emhttpd
    Jan 31 18:48:47 VAULT13 kernel: [ 3847]     0  3847     4347     1160    73728        0             0 ttyd
    Jan 31 18:48:47 VAULT13 kernel: [ 3850]     0  3850    37264      941    65536        0             0 nginx
    Jan 31 18:48:47 VAULT13 kernel: [ 3851]    99  3851    39098     2124    86016        0             0 nginx
    Jan 31 18:48:47 VAULT13 kernel: [19964]     0 19964    48864     1520   409600        0             0 nmbd
    Jan 31 18:48:47 VAULT13 kernel: [19966]     0 19966    68679     4158   557056        0             0 smbd
    Jan 31 18:48:47 VAULT13 kernel: [19969]     0 19969    67332     1430   528384        0             0 smbd-notifyd
    Jan 31 18:48:47 VAULT13 kernel: [19970]     0 19970    67334     1062   528384        0             0 cleanupd
    Jan 31 18:48:47 VAULT13 kernel: [19972]     0 19972    62116     2632   503808        0             0 winbindd
    Jan 31 18:48:47 VAULT13 kernel: [19974]     0 19974    84514    25681   692224        0             0 winbindd
    Jan 31 18:48:47 VAULT13 kernel: [20160]     0 20160     1437      707    49152        0             0 diskload
    Jan 31 18:48:47 VAULT13 kernel: [20226]     0 20226    39486     2930   253952        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20487]     0 20487    36501      173    69632        0             0 shfs
    Jan 31 18:48:47 VAULT13 kernel: [20500]     0 20500   223008     3695   180224        0             0 shfs
    Jan 31 18:48:47 VAULT13 kernel: [20609]     0 20609   332598    18938   425984        0          -500 dockerd
    Jan 31 18:48:47 VAULT13 kernel: [20626]     0 20626   323997     7915   356352        0          -500 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [21111]     0 21111      917      406    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [21125]     0 21125      917      406    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [21138]     0 21138     1891     1096    73728        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [21156]     0 21156     1075       19    49152        0             0 tini
    Jan 31 18:48:47 VAULT13 kernel: [21205]     0 21205    26127     3265   249856        0             0 supervisord
    Jan 31 18:48:47 VAULT13 kernel: [21318]     0 21318     1891     1070    69632        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [21336]     0 21336     1157       16    49152        0             0 sh
    Jan 31 18:48:47 VAULT13 kernel: [21854]     0 21854     1875     1075    69632        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [21872]     0 21872     8500     1250    98304        0             0 my_init
    Jan 31 18:48:47 VAULT13 kernel: [21933]     0 21933    18092      397   180224        0             0 syslog-ng
    Jan 31 18:48:47 VAULT13 kernel: [22010]    99 22010     3996      102    69632        0             0 start.sh
    Jan 31 18:48:47 VAULT13 kernel: [22023]    99 22023  1149879   116592  2678784        0             0 mono
    Jan 31 18:48:47 VAULT13 kernel: [22545]     0 22545     1098       20    49152        0             0 runsvdir
    Jan 31 18:48:47 VAULT13 kernel: [22596]     0 22596     1060       18    53248        0             0 runsv
    Jan 31 18:48:47 VAULT13 kernel: [22597]     0 22597     1060       19    53248        0             0 runsv
    Jan 31 18:48:47 VAULT13 kernel: [22598]     0 22598     1060       19    53248        0             0 runsv
    Jan 31 18:48:47 VAULT13 kernel: [22600]     0 22600     7744       62    98304        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [23440]    99 23440     1126       17    53248        0             0 start_pms
    Jan 31 18:48:47 VAULT13 kernel: [24320]    99 24320   282666    21007  1593344        0             0 Plex Media Serv
    Jan 31 18:48:47 VAULT13 kernel: [24360]    99 24360   456887    32615   696320        0             0 Plex Script Hos
    Jan 31 18:48:47 VAULT13 kernel: [24448]    99 24448   134934    13238   544768        0             0 Plex DLNA Serve
    Jan 31 18:48:47 VAULT13 kernel: [24451]    99 24451   134271      874   278528        0             0 Plex Tuner Serv
    Jan 31 18:48:47 VAULT13 kernel: [24562]     0 24562   310059    19266  1581056        0             0 homebridge
    Jan 31 18:48:47 VAULT13 kernel: [24618]     0 24618   306299     7846   831488        0             0 homebridge-conf
    Jan 31 18:48:47 VAULT13 kernel: [24828]     0 24828    62085     1672   499712        0             0 winbindd
    Jan 31 18:48:47 VAULT13 kernel: [25561]     0 25561     2614      401    73728        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [25569]     0 25569     1875     1069    69632        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [25587]     0 25587       51        1    24576        0             0 s6-svscan
    Jan 31 18:48:47 VAULT13 kernel: [25674]     0 25674       51        1    24576        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [25843]     0 25843       51        1    24576        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [25846]    99 25846  1064587    52603  1122304        0             0 nzbget
    Jan 31 18:48:47 VAULT13 kernel: [26348]     0 26348      917      406    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [26356]     0 26356     2227     1074    73728        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [26374]     0 26374       49        1    28672        0             0 s6-svscan
    Jan 31 18:48:47 VAULT13 kernel: [26463]     0 26463       49        1    28672        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [26638]     0 26638       49        1    28672        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [26641]    99 26641  1219688   380481  4239360        0             0 mono
    Jan 31 18:48:47 VAULT13 kernel: [26900]     0 26900      917      421    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [26921]     0 26921     1891     1066    69632        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [26939]     0 26939       51        1    24576        0             0 s6-svscan
    Jan 31 18:48:47 VAULT13 kernel: [27015]     0 27015       51        1    24576        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [27181]     0 27181       51        1    24576        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [27184]    99 27184    68621    18237   593920        0             0 python
    Jan 31 18:48:47 VAULT13 kernel: [27521]     0 27521     3799      980    86016        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27534]     0 27534      917      406    65536        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27548]     0 27548      917      421    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27562]     0 27562     1269      406    65536        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27574]     0 27574      917      406    65536        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27587]     0 27587      917      406    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27601]     0 27601      917      404    61440        0          -500 docker-proxy
    Jan 31 18:48:47 VAULT13 kernel: [27607]     0 27607     2227     1118    73728        0          -999 docker-containe
    Jan 31 18:48:47 VAULT13 kernel: [27625]     0 27625       49        1    28672        0             0 s6-svscan
    Jan 31 18:48:47 VAULT13 kernel: [27700]     0 27700       49        1    28672        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [27897]     0 27897       49        1    28672        0             0 s6-supervise
    Jan 31 18:48:47 VAULT13 kernel: [27900]    99 27900  1901815   160960  3846144        0             0 java
    Jan 31 18:48:47 VAULT13 kernel: [28802]    99 28802   990757   771360  6631424        0             0 mongod
    Jan 31 18:48:47 VAULT13 kernel: [30749]     0 30749    37225      540    77824        0             0 apcupsd
    Jan 31 18:48:47 VAULT13 kernel: [25314]    99 25314     5394      192    77824        0             0 EasyAudioEncode
    Jan 31 18:48:47 VAULT13 kernel: [ 2824]     0  2824    11428       84   131072        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [12176]     0 12176    11428       81   131072        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [20041]     0 20041     7744       62    94208        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [29731]     0 29731     7744       62    94208        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [ 6454]     0  6454     7744       62    94208        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [15688]     0 15688     7744       62    94208        0             0 cron
    Jan 31 18:48:47 VAULT13 kernel: [20547]     0 20547    40341     3484   262144        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20548]     0 20548    40276     3308   258048        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20550]     0 20550    40341     3478   262144        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20610]     0 20610    40341     3465   262144        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20611]     0 20611     1451      730    53248        0             0 sh
    Jan 31 18:48:47 VAULT13 kernel: [20617]     0 20617    10835     6804   143360        0             0 docker
    Jan 31 18:48:47 VAULT13 kernel: [20656]     0 20656    40341     3485   262144        0             0 php-fpm
    Jan 31 18:48:47 VAULT13 kernel: [20829]     0 20829     1429      700    53248        0             0 sh
    Jan 31 18:48:47 VAULT13 kernel: [20830]     0 20830     1498      789    49152        0             0 rc.docker
    Jan 31 18:48:47 VAULT13 kernel: [20831]     0 20831      632      467    45056        0             0 logger
    Jan 31 18:48:47 VAULT13 kernel: [20896]     0 20896    10635     6149   139264        0             0 docker
    Jan 31 18:48:47 VAULT13 kernel: [20957]     0 20957     1451      730    49152        0             0 sh
    Jan 31 18:48:47 VAULT13 kernel: [20958]     0 20958    10443     6434   139264        0             0 docker
    Jan 31 18:48:47 VAULT13 kernel: [21003]     0 21003      609      195    45056        0             0 sleep
    Jan 31 18:48:47 VAULT13 kernel: Out of memory: Kill process 28802 (mongod) score 390 or sacrifice child
    Jan 31 18:48:47 VAULT13 kernel: Killed process 28802 (mongod) total-vm:3963028kB, anon-rss:3085440kB, file-rss:0kB, shmem-rss:0kB

    I have read that Radarr and Sonarr, or at least older versions, used mono or mongod, and I have seen high usage on those lines in an old diagnostic, but I wasn't sure if it was related.

     

    If you need something different or I should post additional information, let me know.

     

    Thanks for the help.

    vault13-diagnostics-20190131-1851.zip
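    For anyone reading the table above: the rss column in the kernel's OOM process list is counted in 4 KiB pages, so mongod's 771360 pages works out to about 3 GiB, which lines up with the anon-rss:3085440kB in the kill line. A quick check:

    ```shell
    # rss in the OOM process table is in 4 KiB pages; convert mongod's entry.
    rss_pages=771360
    rss_kib=$(( rss_pages * 4 ))          # 3085440 KiB, matching anon-rss in the log
    rss_mib=$(( rss_kib / 1024 ))
    echo "mongod rss: ${rss_kib} KiB (~${rss_mib} MiB)"
    ```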

  7. I've read that preclear is pretty much unsupported now, and I've also seen discussions that Unraid will 'preclear' a disk in the background when adding it to the array.

     

    With that being said, how do I know if it is clearing or if there are errors? I've added the drive to the array and Unraid has formatted it, but that's all that seems to have happened. Am I missing something, or is the best option to just run the extended SMART report?

     

    If I'm beating a dead horse let me know.
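    For what it's worth, the extended SMART test can be kicked off from the console with smartctl (a sketch; /dev/sdX is a placeholder for the actual device, and the test can take many hours on a large drive):

    ```shell
    # Start the extended (long) offline self-test; it runs in the drive's firmware.
    smartctl -t long /dev/sdX

    # Later: check overall health, plus the self-test log for results and errors.
    smartctl -H /dev/sdX
    smartctl -l selftest /dev/sdX
    ```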

  8. Sorry for my ignorance, but I've never dealt with server hardware before, so this is a little new to me.

     

    I have an opportunity to pick up a Supermicro 219-9 case (actually the whole system, with a Xeon 5620 - https://www.supermicro.com/products/system/2u/2026/sys-2026t-6rf_.cfm?lan=2 ) for a good price. There might be issues with the mobo, but I'm more interested in the case.

     

    Anyways, I'm already running an ASRock H97 Pro4 with a G3258 and stock cooler, and it's been great; it's just a typical Plex server running some other basic dockers. With that being said, can I take everything I already have and transplant it into the Supermicro 219-9 case, using the power supply that comes with that case? Any concerns with the backplane?

     

    Thanks for the help. 

     

     

  9. Hello VPN experts.

     

    I'm in the middle of upgrading my server and am seeking some guidance with OpenVPN.

    First off, this is new territory for me and I'm completely lost.

     

    I want to have a VPN set up with PIA and would like 99% of my household internet traffic to go through it: the Unraid server, laptops, XBOX, etc.

     

    Now, I was thinking of running OpenVPN on the Unraid server, but that's where I start to get confused: how do I get everything to use the VPN?

     

    So, my questions:

    • How is this wired? Modem-->Router-->Unraid (then somehow everything goes through OpenVPN?) or Modem-->Unraid-->Switch-->Everything else?

    • Do I need two NICs on the motherboard? I was looking at the ASRock Z97 Extreme6.

    • Am I better off running a DD-WRT router instead?

    • Is security an issue with the Unraid server directly connected to the modem?

     

    Sorry for all the questions.

     

    Thanks.

  10. Honestly, I thought about it, but for $10 I can convert 2x5.25 bays to 2x3.5 bays, versus spending a lot more on one of those caddies just to get room for one additional drive. Instead (thanks to suggestions on this thread), if/when I need to expand my storage, I plan to buy this:

     

    http://www.caselabs-store.com/hdd-cage-assy-standard/

     

    4 additional drive slots for only $30.  Now that's my cup of tea!

     

    You might find this interesting for additional space: it converts 2 x 5.25 bays into 3 x 3.5 bays, with cooling, for cheap - http://www.amazon.com/EverCool-Dual-Drive-Triple-Cooling/dp/B0032UUGF4

     

    Also, one other thing I found out about the R4 case is that you can double-stack the lower hard drive cages from Fractal - https://support.fractal-design.com/support/solutions/articles/190191-will-i-be-able-to-fit-an-extra-hdd-cage-in-my-case-

     

    With that being said, this case could turn into a 14-drive system.

  11. Nice build! Just one little question: what 5.25 -> 3.5 drive bay adapter did you use? I just like how clean yours is.

     

    Thanks.

     

    Thanks! For the adapter, here is the part I bought:   http://www.microcenter.com/product/408675/Internal_35_to_525_HDD_Plastic_Mounting_Kit

     

    Thanks for the reply. I just noticed that you put a single drive into the 5.25 bay; I thought you might have used a 5.25-to-3.5 caddy that would have given you 3 x 3.5 drive slots.

  12. Config.ini files under SickBeard and CouchPotato? I never edit the config.ini files, as all the settings are changeable in the program. I'll check the config.ini files, but that wouldn't explain why both programs write the metadata to the right places but don't move the actual video file.

     

    Does Unraid check where the files are, specifically when copying from a drive to a share, and see what's the fastest course of action?

  13. I'm running Unraid 4.6-rc3 with Sabnzbd, SickBeard and Couchpotato. I have everything running on disk10.

     

    Now both SB and CP are set to move the files to the appropriate shares once everything has been completed.

    SickBeard - mnt/user/TV

    CouchPotato - mnt/user/Movies

     

    Now, under Shares in Unraid, I have Movies set to include disk4, disk5, disk6, disk7, disk8, and disk9, while TV includes disk1, disk2, and disk3.

     

    After SB finishes, it runs SABtoSickbeard.py, which does its magic and then copies the TV show to the share, except it always ends up on disk10 in a new TV share directory (mnt/disk10/TV/Show Name/Season/File.mkv). Yet the .nfo and .tbn files are placed in their proper spots (somewhere on disk1, disk2, or disk3, depending on the show). I have the split level set to 1 so it keeps the whole TV show together no matter how many seasons; the split level is also set to 1 for Movies.

     

    Now jump to CP: I have SABnzbd set to download to mnt/disk10/Downloads/Movie Temp, where CP watches for files and, once they are complete, does its magic and moves them to the Movies share. But just like SB, it creates a new Movies share on disk10 (mnt/disk10/Movies/Movie Name/File.mkv), yet the metadata (.nfo and .tbn files) is properly copied over to the share, somewhere on disk4 through disk9.

     

    I've tried excluding disk10 on the shares; that didn't work. The only effect is that XBMC can't see any of the files on disk10 when looking at TV or Movies. I've cleaned disk10 and rebooted, and again it creates new shares.

     

    So I have no idea why it keeps insisting on moving everything to disk10. Do I have the split level set wrong, or is some other setting wrong? I've looked in the syslog and it doesn't show any errors pertaining to the files being bounced somewhere else.

     

    Thanks for the help.

  14. Okay, I followed the instructions....

     

    echo "Installing Avahi dependencies..."
    installpkg /boot/packages/libcap-2.14-i486-1.tgz >/dev/null
    installpkg /boot/packages/dbus-1.2.6-i486-1.tgz >/dev/null
    installpkg /boot/packages/gcc-4.2.4-i486-1.tgz >/dev/null
    installpkg /boot/packages/avahi-0.6.25-i486-62.1.tgz >/dev/null
    
    echo "Starting Avahi daemon..."
    cp /boot/configfiles/samba.service /etc/avahi/services/
    /usr/bin/dbus-daemon --system
    /etc/rc.d/rc.avahidaemon restart >/dev/null

     

    but after running /usr/bin/dbus-daemon --system, it returns the following error: Failed to start message bus: Could not get UID and GID for username "messagebus"
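    In case others hit the same wall: that particular dbus failure usually means the messagebus system account doesn't exist yet. A possible fix (a sketch; the UID/GID of 81 follows Slackware convention but is an assumption here) is to create the group and user before starting the daemon:

    ```shell
    # dbus-daemon --system drops privileges to a "messagebus" user, which
    # must exist first. Create the group and user (IDs are an assumption).
    groupadd -g 81 messagebus
    useradd -u 81 -g messagebus -d /var/run/dbus -s /bin/false messagebus

    # Now starting the system bus should succeed.
    /usr/bin/dbus-daemon --system
    ```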

  15.  

    installpkg /boot/packages/SABnzbdDependencies-1.2-i486-unRAID.tgz

     

    Well, that's the part where I thought I was doing something wrong with the user scripts. When I had the line as installpkg /boot/SABnzbdDependencies-1.2-i486-unRAID.tgz, it would return an error saying that the file doesn't end in .tgz; the same happened with the python command, because the path wasn't found. So that's why I was going into the directory first and then running each command.

  16. Most of the time I have no idea what I'm doing when it comes to the command line, but I've stumbled my way around to get things up and running. I set up unMenu and it's working, and now I want to set up a custom user script. I had SABnzbd start from the go file, but it magically stopped working one day, no idea why; if I go to the command line and run everything manually, SABnzbd starts fine. (After reading through a few posts on here, I think it might be a timing issue.) Anyway, this is my custom script (it works, but I have no idea if it should be like this or not):

     

    #define USER_SCRIPT_LABEL SABnzbd

    #define USER_SCRIPT_DESCR Install dependencies and start SABnzbd

    cd /boot/

    installpkg SABnzbdDependencies-1.2-i486-unRAID.tgz

    cd /boot/custom/usr/share/packages/sabnzbd/

    python SABnzbd.py -d -f /boot/custom/usr/share/packages/sabnzbd/SABnzbd.ini -s 172.16.1.100:8081

     

    Thanks.
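    If the go-file failure really is a timing issue (the flash share not being mounted yet at boot), one common workaround, sketched here with an arbitrary 60-second ceiling, is to poll for the package file before installing:

    ```shell
    #define USER_SCRIPT_LABEL SABnzbd (delayed start)
    #define USER_SCRIPT_DESCR Wait for /boot, then install dependencies and start SABnzbd

    # Poll for up to ~60 seconds until the flash share is visible.
    for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
      [ -f /boot/SABnzbdDependencies-1.2-i486-unRAID.tgz ] && break
      sleep 5
    done

    installpkg /boot/SABnzbdDependencies-1.2-i486-unRAID.tgz
    cd /boot/custom/usr/share/packages/sabnzbd/
    python SABnzbd.py -d -f /boot/custom/usr/share/packages/sabnzbd/SABnzbd.ini -s 172.16.1.100:8081
    ```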

  17. Yes.  Reads of a block of a file involve reading one block of physical data from the disk.   

    Each write to the server involves reading the existing contents of the data block being written to, and reading the corresponding parity block, then writing those same two blocks with their updated contents.  4 I/O operations vs. 1.

     

    Now, that being said, if it's all set up at gigabit speeds, what write speeds should I expect going to the unRaid server (a guesstimate)?

  18. Another question regarding transfer speeds.

     

    Now, mind you, it's only connected at 100Mbps....

     

    When transferring files to the unRaid server I get on average 3MB/sec, so a 1GB file takes 10 minutes. So out of curiosity I pulled the exact same file from the unRaid server back to the computer; it got on average 10MB/sec, so that same file took 3 minutes.

     

    Is this normal? Is the difference in transfer times due to the unRaid server creating parity while the file is getting copied?
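    A back-of-envelope check suggests the numbers fit the parity explanation: 100 Mbps tops out around 12.5 MB/s, so 10MB/sec reads are near the link ceiling, while the read-modify-write cycle for parity (4 I/O operations per write vs. 1 per read) would put writes at roughly a quarter of read speed, close to the observed 3MB/sec:

    ```shell
    # Theoretical payload ceiling of a 100 Mbps link, ignoring protocol overhead.
    link_mbps=100
    echo "link ceiling: ~$(( link_mbps / 8 )) MB/s"        # ~12 MB/s

    # Crude parity estimate: 4 I/O operations per write vs. 1 per read,
    # so expect write throughput near read throughput divided by 4.
    read_kbs=10240                                          # 10 MB/s in KB/s
    echo "estimated write: ~$(( read_kbs / 4 )) KB/s"       # 2560 KB/s = 2.5 MB/s
    ```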