Posts posted by matt_erbeck

  1. 5 hours ago, JorgeB said:

    rootfs is full; this will cause all sorts of problems since the OS needs it to run. Check all your mappings: anything writing anywhere other than /boot or /mnt/user (or disk paths) will be writing to RAM.

     

    @JorgeB

     

    Starting my server in safe mode and starting the array with all containers off, here are my current mappings.

    I don't see anything that stands out as mis-mapped, though.

    How can I drill down to the files taking up the space under rootfs to try and clear them out?

     

    [Screenshot: Docker container path mappings]

     

    Edit:

    I found that the RAM from my other PC works in my server, so I was able to bump up from 4 GB to 8 GB, which let me boot the server normally and start the array, plugins, etc.

    How can I drill down into what's filling up my rootfs, though, to keep it from growing? It's sitting at 50% right now with 8 GB of RAM.

    root@JARVIS:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs          3.7G  1.9G  1.9G  50% /
    tmpfs            32M  720K   32M   3% /run
    /dev/sda1        15G  1.1G   14G   7% /boot
    overlay         3.7G  1.9G  1.9G  50% /lib/firmware
    overlay         3.7G  1.9G  1.9G  50% /lib/modules
    devtmpfs        8.0M     0  8.0M   0% /dev
    tmpfs           3.8G     0  3.8G   0% /dev/shm
    cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
    tmpfs           128M  280K  128M   1% /var/log
    tmpfs           1.0M     0  1.0M   0% /mnt/disks
    tmpfs           1.0M     0  1.0M   0% /mnt/remotes
    tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
    /dev/md1        7.3T  6.8T  561G  93% /mnt/disk1
    /dev/md2         11T  8.3T  2.7T  77% /mnt/disk2
    /dev/md3        7.3T  3.5T  3.8T  48% /mnt/disk3
    /dev/sdg1       233G   77G  156G  33% /mnt/cache
    /dev/sdf1       112G  3.7M  111G   1% /mnt/download_cache
    shfs             26T   19T  7.0T  73% /mnt/user0
    shfs             26T   19T  7.0T  73% /mnt/user
    /dev/loop2       30G  9.8G   19G  35% /var/lib/docker
    /dev/loop3      1.0G  4.3M  905M   1% /etc/libvirt
    tmpfs           772M     0  772M   0% /run/user/0
    root@JARVIS:~#
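A hedged sketch of how to drill into that rootfs usage from the same shell: du's -x flag stays on one filesystem, so the /boot, /mnt, and loop mounts shown in the df output above are skipped and only what actually lives in RAM is counted.

```shell
# Stay on the root filesystem (-x) and list the biggest directories in MB.
# Separate mounts like /mnt/user and /boot are skipped automatically.
du -x -m / 2>/dev/null | sort -rn | head -20

# Common culprits on a RAM-backed rootfs are worth checking directly:
du -sm /tmp /var/tmp /root /usr/local 2>/dev/null
```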
    

     

  2. 14 hours ago, JorgeB said:

    Please post output of

    df -h

     

    From root or a specific directory?

     

    Edit:

    @JorgeB

    I realized my N54L has an eSATA port, so I was able to get all of my disks connected again, and I can verify in /var/log/syslog that they all imported.

     

    I realize the above command is for the whole system, not a specific directory. Here is the output:

    root@JARVIS:/var/log# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs          1.7G  1.7G  328K 100% /
    tmpfs            32M  340K   32M   2% /run
    /dev/sda1        15G  1.1G   14G   7% /boot
    overlay         1.7G  1.7G  328K 100% /lib/firmware
    overlay         1.7G  1.7G  328K 100% /lib/modules
    devtmpfs        8.0M     0  8.0M   0% /dev
    tmpfs           1.9G     0  1.9G   0% /dev/shm
    cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
    tmpfs           128M  252K  128M   1% /var/log
    tmpfs           1.0M     0  1.0M   0% /mnt/disks
    tmpfs           1.0M     0  1.0M   0% /mnt/remotes
    tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
    tmpfs           369M     0  369M   0% /run/user/0
    

     

     

    So it looks like my rootfs is 100% full.

    After the crash that left my old server unable to power on, I moved all my drives to the new system, plugged the USB into an internal header (the N54L is my older server, so I know this configuration works with Unraid and this USB drive), and tried to power it on.

    With a standard boot, I get the above issues and no webUI.

     

    If I boot in safe mode (no plugins), it comes up without issues, and df -h gives the following output:

    root@JARVIS:/var/log# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs          1.7G  1.7G   44M  98% /
    tmpfs            32M  408K   32M   2% /run
    /dev/sda1        15G  1.1G   14G   7% /boot
    overlay         1.7G  1.7G   44M  98% /lib/firmware
    overlay         1.7G  1.7G   44M  98% /lib/modules
    devtmpfs        8.0M     0  8.0M   0% /dev
    tmpfs           1.9G     0  1.9G   0% /dev/shm
    cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
    tmpfs           128M  200K  128M   1% /var/log
    tmpfs           369M     0  369M   0% /run/user/0
    /dev/md1        7.3T  6.8T  561G  93% /mnt/disk1
    /dev/md2         11T  8.3T  2.7T  77% /mnt/disk2
    /dev/md3        7.3T  3.5T  3.9T  48% /mnt/disk3
    /dev/sdg1       233G   75G  158G  33% /mnt/cache
    /dev/sdf1       112G   35G   76G  32% /mnt/download_cache
    shfs             26T   19T  7.0T  73% /mnt/user0
    shfs             26T   19T  7.0T  73% /mnt/user
    /dev/loop2       30G  9.8G   19G  35% /var/lib/docker
    

     

    I am also able to verify that my array shows the proper configuration and start it this way (no plugins).

  3. @JorgeB

     

    I ran the diagnostics command from the SSH CLI and got the following output:

    root@JARVIS:~# diagnostics
    Starting diagnostics collection...
    Warning: file_put_contents(): Only -1 of 8 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 314
    tail: write error: No space left on device
    
    Warning: file_put_contents(): Only -1 of 302 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 12 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 9043 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 239 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 196 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 969 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 13 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 6469 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 3277 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 14 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 11 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 323
    
    Warning: file_put_contents(): Only -1 of 148 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 340
    
    Warning: file_put_contents(): Only -1 of 2 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 361
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    echo: write error: No space left on device
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 414
    
    Warning: file_put_contents(): Only -1 of 1086 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 510
    done.
    ZIP file '/boot/logs/tower-diagnostics-20221112-0801.zip' created.
    root@JARVIS:~#

     

    Zip File:

     

     

    tower-diagnostics-20221112-0801.zip

  4. Hi All,

    Earlier today my UPS failed and lost power (CyberPower 1500VA, E21 status), and my server shut down. I was able to clear and reset the UPS, but the server would not turn on. I can't say definitively, but I'm pretty sure the server hardware is shot (burnt smell 😢).

     

    Anyway, I had my old HP N54L microserver hardware that I thought I would throw my drives into to try and bring my array back up.

    My dead server's hardware had space for 6 drives (4 array drives and 2 cache drives).

     

    The N54L only has space for 5 drives: 4 array drives and 1 cache drive (the one that contains my appdata, etc.). The other cache drive was just a download cache.

     

    I am able to connect to the server via SSH, but not the webui.

     

    ***Edit for some additional troubleshooting.***

    • I edited config/disk.cfg and set startArray="no".
    • After changing disk.cfg, I was able to boot the system with plugins disabled via the GUI and finally access the webUI.
      • This showed my drives, as well as my missing cache drive.
    • I tried restarting the server again, this time with plugins enabled but still with startArray="no", and got the same log errors as below.
      • This leads me to believe "/etc/rc.d/rc.unraid-api install" is a plugin trying to write somewhere it shouldn't, and that this is what's blocking the webUI.
      • Could my missing cache drive be causing this, or is something else funky going on because of the hardware swap/configuration change?
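The disk.cfg change in the first bullet can be scripted from the SSH session; a minimal sketch, assuming the stock /boot/config/disk.cfg path on the Unraid flash drive:

```shell
# Flip array autostart off in the Unraid disk config on the flash drive.
# Back up the original first; the change survives reboots since /boot is the USB stick.
cp /boot/config/disk.cfg /boot/config/disk.cfg.bak
sed -i 's/^startArray="yes"/startArray="no"/' /boot/config/disk.cfg
grep '^startArray' /boot/config/disk.cfg
```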

     

     

    When I check /var/log/syslog, I see the following issues. Any help in attempting to solve this would be greatly appreciated.

    There is more above this, but I don't see any real errors. Some details are censored.
    
    Nov 11 22:07:13 JARVIS  emhttpd: shcmd (5): modprobe md-mod super=/boot/config/super.dat
    Nov 11 22:07:13 JARVIS kernel: md: unRAID driver 2.9.24 installed
    Nov 11 22:07:13 JARVIS  emhttpd: Plus key detected, GUID: 0781-5571-0201-420420112195 FILE: /boot/config/Plus.key
    Nov 11 22:07:13 JARVIS  emhttpd: Device inventory:
    Nov 11 22:07:13 JARVIS  emhttpd: WDC_WD80EFAX-******** (sdd) 512 15628053168
    Nov 11 22:07:13 JARVIS  emhttpd: WDC_WD80EFAX-******** (sde) 512 15628053168
    Nov 11 22:07:13 JARVIS  emhttpd: WDC_WD120EFBX-******** (sdb) 512 23437770752
    Nov 11 22:07:13 JARVIS  emhttpd: Samsung_SSD_840_Series_******** (sdf) 512 488397168
    Nov 11 22:07:13 JARVIS  emhttpd: WDC_WD120EFBX-******** (sdc) 512 23437770752
    Nov 11 22:07:13 JARVIS  emhttpd: SanDisk_Cruzer_Fit_4C5******** (sda) 512 30529536
    Nov 11 22:07:13 JARVIS kernel: mdcmd (1): import 0 sdb 64 11718885324 0 WDC_WD120EFBX-********
    Nov 11 22:07:13 JARVIS kernel: md: import disk0: (sdb) WDC_WD120EFBX-******** size: 11718885324
    Nov 11 22:07:13 JARVIS kernel: mdcmd (2): import 1 sdd 64 7814026532 0 WDC_WD80EFAX-********
    Nov 11 22:07:13 JARVIS kernel: md: import disk1: (sdd) WDC_WD80EFAX-******** size: 7814026532
    Nov 11 22:07:13 JARVIS kernel: mdcmd (3): import 2 sdc 64 11718885324 0 WDC_WD120EFBX-********
    Nov 11 22:07:13 JARVIS kernel: md: import disk2: (sdc) WDC_WD120EFBX-******** size: 11718885324
    Nov 11 22:07:13 JARVIS kernel: mdcmd (4): import 3 sde 64 7814026532 0 WDC_WD80EFAX-********
    Nov 11 22:07:13 JARVIS kernel: md: import disk3: (sde) WDC_WD80EFAX-******** size: 7814026532
    Nov 11 22:07:13 JARVIS kernel: mdcmd (5): import 4
    Nov 11 22:07:13 JARVIS kernel: mdcmd (6): import 5
    Nov 11 22:07:13 JARVIS kernel: mdcmd (7): import 6
    Nov 11 22:07:13 JARVIS kernel: mdcmd (8): import 7
    Nov 11 22:07:13 JARVIS kernel: mdcmd (9): import 8
    Nov 11 22:07:13 JARVIS kernel: mdcmd (10): import 9
    Nov 11 22:07:13 JARVIS kernel: mdcmd (11): import 10
    Nov 11 22:07:13 JARVIS kernel: mdcmd (12): import 11
    Nov 11 22:07:13 JARVIS kernel: mdcmd (13): import 12
    Nov 11 22:07:13 JARVIS kernel: mdcmd (14): import 13
    Nov 11 22:07:13 JARVIS kernel: mdcmd (15): import 14
    Nov 11 22:07:13 JARVIS kernel: mdcmd (16): import 15
    Nov 11 22:07:13 JARVIS kernel: mdcmd (17): import 16
    Nov 11 22:07:13 JARVIS kernel: mdcmd (18): import 17
    Nov 11 22:07:13 JARVIS kernel: mdcmd (19): import 18
    Nov 11 22:07:13 JARVIS kernel: mdcmd (20): import 19
    Nov 11 22:07:13 JARVIS kernel: mdcmd (21): import 20
    Nov 11 22:07:13 JARVIS kernel: mdcmd (22): import 21
    Nov 11 22:07:13 JARVIS kernel: mdcmd (23): import 22
    Nov 11 22:07:13 JARVIS kernel: mdcmd (24): import 23
    Nov 11 22:07:13 JARVIS kernel: mdcmd (25): import 24
    Nov 11 22:07:13 JARVIS kernel: mdcmd (26): import 25
    Nov 11 22:07:13 JARVIS kernel: mdcmd (27): import 26
    Nov 11 22:07:13 JARVIS kernel: mdcmd (28): import 27
    Nov 11 22:07:13 JARVIS kernel: mdcmd (29): import 28
    Nov 11 22:07:13 JARVIS kernel: mdcmd (30): import 29
    Nov 11 22:07:13 JARVIS kernel: md: import_slot: 29 empty
    Nov 11 22:07:13 JARVIS  emhttpd: import 30 cache device: (sdf) Samsung_SSD_840_Series_********
    Nov 11 22:07:13 JARVIS  emhttpd: import 31 cache device: no device
    Nov 11 22:07:13 JARVIS  emhttpd: import flash device: sda
    Nov 11 22:07:14 JARVIS root: Starting apcupsd power management:  /sbin/apcupsd
    Nov 11 22:07:14 JARVIS  apcupsd[6364]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sdd
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sde
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sdb
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sdf
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sdc
    Nov 11 22:07:14 JARVIS  emhttpd: read SMART /dev/sda
    Nov 11 22:07:14 JARVIS  emhttpd: Starting services...
    Nov 11 22:07:14 JARVIS  emhttpd: shcmd (12): /etc/rc.d/rc.samba restart
    Nov 11 22:07:14 JARVIS  winbindd[1134]: [2022/11/11 22:07:14.331309,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Nov 11 22:07:14 JARVIS  winbindd[1134]:   Got sig[15] terminate (is_parent=0)
    Nov 11 22:07:14 JARVIS  nmbd[1119]: [2022/11/11 22:07:14.333042,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Nov 11 22:07:14 JARVIS  nmbd[1119]:   Got SIGTERM: going down...
    Nov 11 22:07:14 JARVIS  wsdd2[1129]: 'Terminated' signal received.
    Nov 11 22:07:14 JARVIS  wsdd2[1129]: terminating.
    Nov 11 22:07:14 JARVIS  winbindd[1132]: [2022/11/11 22:07:14.335625,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Nov 11 22:07:14 JARVIS  winbindd[1132]:   Got sig[15] terminate (is_parent=1)
    Nov 11 22:07:14 JARVIS  winbindd[2091]: [2022/11/11 22:07:14.340712,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Nov 11 22:07:14 JARVIS  winbindd[2091]:   Got sig[15] terminate (is_parent=0)
    Nov 11 22:07:16 JARVIS root: Starting Samba:  /usr/sbin/smbd -D
    Nov 11 22:07:16 JARVIS  smbd[6404]: [2022/11/11 22:07:16.568341,  0] ../../source3/smbd/server.c:1741(main)
    Nov 11 22:07:16 JARVIS  smbd[6404]:   smbd version 4.17.0 started.
    Nov 11 22:07:16 JARVIS  smbd[6404]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Nov 11 22:07:16 JARVIS root:                  /usr/sbin/nmbd -D
    Nov 11 22:07:16 JARVIS  nmbd[6406]: [2022/11/11 22:07:16.613974,  0] ../../source3/nmbd/nmbd.c:901(main)
    Nov 11 22:07:16 JARVIS  nmbd[6406]:   nmbd version 4.17.0 started.
    Nov 11 22:07:16 JARVIS  nmbd[6406]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Nov 11 22:07:16 JARVIS root:                  /usr/sbin/wsdd2 -d
    Nov 11 22:07:16 JARVIS  wsdd2[6420]: starting.
    Nov 11 22:07:16 JARVIS root:                  /usr/sbin/winbindd -D
    Nov 11 22:07:16 JARVIS  winbindd[6421]: [2022/11/11 22:07:16.819763,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Nov 11 22:07:16 JARVIS  winbindd[6421]:   winbindd version 4.17.0 started.
    Nov 11 22:07:16 JARVIS  winbindd[6421]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Nov 11 22:07:16 JARVIS  winbindd[6423]: [2022/11/11 22:07:16.828798,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
    Nov 11 22:07:16 JARVIS  winbindd[6423]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Nov 11 22:07:16 JARVIS  emhttpd: shcmd (16): /etc/rc.d/rc.avahidaemon start
    Nov 11 22:07:16 JARVIS root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Successfully dropped root privileges.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: avahi-daemon 0.8 starting up.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Successfully called chroot().
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Successfully dropped remaining capabilities.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Loading service file /services/sftp-ssh.service.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Loading service file /services/smb.service.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Loading service file /services/ssh.service.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.88.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: New relevant interface br0.IPv4 for mDNS.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: New relevant interface lo.IPv6 for mDNS.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: New relevant interface lo.IPv4 for mDNS.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Network interface enumeration completed.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Registering new address record for 192.168.1.88 on br0.IPv4.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Registering new address record for ::1 on lo.*.
    Nov 11 22:07:16 JARVIS  avahi-daemon[6440]: Registering new address record for 127.0.0.1 on lo.IPv4.
    Nov 11 22:07:16 JARVIS  emhttpd: shcmd (17): /etc/rc.d/rc.avahidnsconfd start
    Nov 11 22:07:16 JARVIS root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
    Nov 11 22:07:16 JARVIS  avahi-dnsconfd[6449]: Successfully connected to Avahi daemon.
    Nov 11 22:07:17 JARVIS  emhttpd: Autostart disabled (device configuration change)
    Nov 11 22:07:17 JARVIS  emhttpd: shcmd (22): /etc/rc.d/rc.php-fpm start
    Nov 11 22:07:17 JARVIS root: Starting php-fpm  done
    Nov 11 22:07:17 JARVIS  emhttpd: shcmd (23): /etc/rc.d/rc.unraid-api install
    Nov 11 22:07:17 JARVIS root: tar: unraid-api: Wrote only 512 of 10240 bytes
    Nov 11 22:07:17 JARVIS  avahi-daemon[6440]: Server startup complete. Host name is JARVIS.local. Local service cookie is 2188998456.
    Nov 11 22:07:18 JARVIS root: tar: package.json: Cannot write: No space left on device
    Nov 11 22:07:18 JARVIS root: tar: README.md: Cannot write: No space left on device
    Nov 11 22:07:18 JARVIS root: tar: .env.production: Cannot write: No space left on device
    Nov 11 22:07:18 JARVIS root: tar: .env.staging: Cannot write: No space left on device
    Nov 11 22:07:18 JARVIS root: tar: Exiting with failure status due to previous errors
    Nov 11 22:07:18 JARVIS root: cp: error writing '/usr/local/emhttp/webGui/webComps/unraid.min.js': No space left on device
    Nov 11 22:07:18 JARVIS  avahi-daemon[6440]: Service "JARVIS" (/services/ssh.service) successfully established.
    Nov 11 22:07:18 JARVIS  avahi-daemon[6440]: Service "JARVIS" (/services/smb.service) successfully established.
    Nov 11 22:07:18 JARVIS  avahi-daemon[6440]: Service "JARVIS" (/services/sftp-ssh.service) successfully established.
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.unraid-api: line 46:  6486 Segmentation fault      LOG_TYPE=raw "${api_base_directory}/unraid-api/unraid-api" status
    Nov 11 22:07:20 JARVIS  emhttpd: shcmd (24): /etc/rc.d/rc.nginx start
    Nov 11 22:07:20 JARVIS root: cat: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: cat: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: cat: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: cat: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: cat: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 460: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 461: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 462: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 463: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 464: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 465: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 466: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 467: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 468: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 470: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 471: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 472: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 474: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 475: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 476: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 477: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: /etc/rc.d/rc.nginx: line 478: echo: write error: No space left on device
    Nov 11 22:07:20 JARVIS root: Starting Nginx server daemon...
    Nov 11 22:07:20 JARVIS  winbindd[6521]: [2022/11/11 22:07:20.425026,  0] ../../source3/lib/util.c:491(reinit_after_fork)
    Nov 11 22:07:20 JARVIS  winbindd[6521]:   messaging_reinit() failed: NT_STATUS_DISK_FULL
    Nov 11 22:07:20 JARVIS  winbindd[6521]: [2022/11/11 22:07:20.425516,  0] ../../source3/winbindd/winbindd_dual.c:1534(winbindd_reinit_after_fork)
    Nov 11 22:07:20 JARVIS  winbindd[6521]:   reinit_after_fork() failed
    Nov 11 22:07:20 JARVIS  emhttpd: shcmd (25): /etc/rc.d/rc.flash_backup start
    Nov 11 22:07:24 JARVIS  apcupsd[6364]: NIS server startup succeeded
    Nov 11 22:07:39 JARVIS  nmbd[6410]: [2022/11/11 22:07:39.654233,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
    Nov 11 22:07:39 JARVIS  nmbd[6410]:   *****
    Nov 11 22:07:39 JARVIS  nmbd[6410]:
    Nov 11 22:07:39 JARVIS  nmbd[6410]:   Samba name server JARVIS is now a local master browser for workgroup APERATURE_LABS on subnet 192.168.1.88
    Nov 11 22:07:39 JARVIS  nmbd[6410]:
    Nov 11 22:07:39 JARVIS  nmbd[6410]:   *****
    

     

    Edit: So I did a little tweaking, and the main issue is with what I set $upstream_app to.

     

    In the conf below, I try setting it to openeats.

    If I set it to my server's IP, it works perfectly fine.

    Is this a Docker issue or a reverse proxy issue?

    All my other confs work using the Docker container name as the upstream_app.
    I have also ensured that "ALLOWED_HOST:" is currently set to *, so that shouldn't be limiting it.


    Original Problem:

     

    I've got an interesting issue with the reverse proxy setup.


    Using the subdomain conf in the first post, I can connect perfectly fine.

    But I tried to update the conf to use proxy_pass, as I do with the rest of my confs through SWAG/letsencrypt, and now OpenEats just says "Loading" with the dots. I have updated the OpenEats network to "proxynet", which is used for all my containers behind the reverse proxy.

     


     

    Here is my proxy conf:

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name openeats.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
        # enable for Authelia
        #include /config/nginx/authelia-server.conf;
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /ldaplogin;
    
            # enable for Authelia
            #include /config/nginx/authelia-location.conf;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app openeats;
            set $upstream_port 8760;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
        }
    }
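One hedged way to split "Docker issue vs. proxy issue": check that both containers are actually attached to proxynet and that the name resolves through Docker's embedded DNS (the same 127.0.0.11 the resolver line above points at). The container names swag and openeats are taken from this thread; adjust if yours differ.

```shell
# List which containers are attached to the proxynet network:
docker network inspect proxynet \
  --format '{{range .Containers}}{{.Name}} {{end}}'

# Try resolving the container name from inside the proxy container
# (127.0.0.11 is Docker's embedded DNS for user-defined networks):
docker exec swag nslookup openeats 127.0.0.11
```

If the name doesn't resolve, the usual fix is `docker network connect proxynet openeats`, or correcting the conf to match the container's actual name.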


     

     

  6. 5 minutes ago, CorneliousJD said:

     

    Not exactly. Favorites aren't a feature of OpenEats, but ratings are: you can rate your recipes 0-5 stars and sort by rating, so if you give your favorites 5 stars you can effectively bookmark a page for them.

     

    https://openeats.domain.com/browse/?rating=5

    Alright, thanks. I guess since I'm creating all the recipes, I can also edit them to add a 'Favorite' tag and just search by that.

  7. On 1/28/2021 at 12:05 AM, skois said:

     
    I was able to revert my docker image version by changing the repository under the container settings to linuxserver/nextcloud:20.0.2-ls107 from linuxserver/nextcloud.
     
    This version allowed me to start NC.
     
    Now going to Settings > Overview > Version, I can see I am running NC version 16.0.1.
     
    So what is the proper way to update in order to use latest docker image?
    • Update Nextcloud until you get to version 19.
    • Change the Docker repo back to latest.
    • Update Nextcloud to v20.

    After that, keep in mind you need to update the image first and then Nextcloud.


    Sent from my Mi 10 Pro using Tapatalk
     

    Thanks! I was able to update everything easily and successfully.
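The stepwise path above can be sketched with plain Docker commands; on Unraid the equivalent is editing the Repository field in the container template. The tag below is the pinned one from this thread; treat the exact sequence as an assumption and run Nextcloud's own updater between pulls.

```shell
# Pin the image to the last tag compatible with the installed Nextcloud,
# then let Nextcloud's built-in updater step NC forward one major version at a time.
docker pull linuxserver/nextcloud:20.0.2-ls107
# ...update Nextcloud itself via its web updater until it supports the newest image...
docker pull linuxserver/nextcloud:latest
```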

  8. So like many others, I got the below error.

     

    Quote

    This version of Nextcloud is not compatible with > PHP 7.3. You are currently running 7.4.14.

     

    I was able to revert my docker image version by changing the repository under the container settings to linuxserver/nextcloud:20.0.2-ls107 from linuxserver/nextcloud.

     

    This version allowed me to start NC.

     

    Now going to Settings > Overview > Version, I can see I am running NC version 16.0.1.

     

    So what is the proper way to update in order to use the latest Docker image?

  9. On 12/22/2019 at 9:26 PM, Djoss said:

    Can you try to add "--def" for the gmail setting also? If that doesn't help, does using the "--def discord=[...]" argument alone (without gmail) change anything?

    @Djoss No change in behavior for Discord; still nothing gets published. The logs show the full Discord webhook under the parameter section.

  10. Hi @Djoss,

    I appreciate your support in helping to solve everyone's problems!

    I am also having issues with the discord webhook publish, similar to @InfInIty.

     

    I have the "Automated Media Center: Custom Options:" set up properly in the container, I think.

    There are no special characters in my webhook except underscores.

    I also have gmail defined on the same line.

     

    --def discord=https://discordapp.com/api/webhooks/656.... gmail=e.... 

    [redacted for privacy]

     

    In my logs I can see it picks up the parameters of discord and gmail.

    [amc] Parameter: discord = https://discordapp.com/api/webhooks/ 656....   [redacted for privacy, but shows full webhook path]
    [amc] Parameter: gmail = *****   (it appears with **** in the logs)

     

     

    When renaming runs, the logs show:

    [amc] Processed 2 files
    [amc] [mail] Sending email: [FileBot] ....  [redacted for privacy]
    [amc] [mail] Sending email: [FileBot] ....  [redacted for privacy]
    [amc] [mail] Sent email with 0 attachments

     

     

    The Gmail notification comes through properly, but the Discord one not at all.

    Any ideas?

     
