cmannes

Posts posted by cmannes

  1. I read a lot about how to handle btrfs errors, and btrfs check --repair docker.img appears to have done the magic.  My Docker containers all started correctly now and appear to be working as expected.

     

    I'm going to assume user error.  I probably should have stopped all my Docker containers before running the 6.9 update (which, of course, occurred to me shortly after clicking the button).
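
    For anyone else who hits this, the rough shape of what worked for me (a sketch, not gospel: the image path is the one from the log in my post below, so adjust it for your setup, and I'd take a copy of the image first just in case):

    # stop the Docker service first (Settings > Docker > Enable Docker: No)
    # keep a copy of the image before touching it
    cp /mnt/user/system/docker/docker.img /mnt/user/system/docker/docker.img.bak

    # the image is a btrfs filesystem in a file, so check/repair can run against it directly
    btrfs check --repair /mnt/user/system/docker/docker.img

    # then re-enable Docker from the GUI and confirm the containers come back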

  2. Did the 6.9 update.  Started my shares.  They seem fine.

     

    Docker won't start, though, and in the System Log I see:

     

    Mar  2 18:06:30 Cube emhttpd: shcmd (323): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 30
    Mar  2 18:06:32 Cube kernel: BTRFS: device fsid 405485a6-6ae3-4031-affd-458b900c4624 devid 1 transid 1037304 /dev/loop2 scanned by mount (15723)
    Mar  2 18:06:32 Cube kernel: BTRFS info (device loop2): enabling free space tree
    Mar  2 18:06:32 Cube kernel: BTRFS info (device loop2): using free space tree
    Mar  2 18:06:32 Cube kernel: BTRFS info (device loop2): has skinny extents
    Mar  2 18:06:32 Cube kernel: BTRFS info (device loop2): bdev /dev/loop2 errs: wr 91, rd 0, flush 0, corrupt 0, gen 0
    Mar  2 18:06:33 Cube kernel: BTRFS info (device loop2): enabling ssd optimizations
    Mar  2 18:06:33 Cube kernel: BTRFS info (device loop2): creating free space tree
    Mar  2 18:06:33 Cube kernel: BTRFS critical (device loop2): corrupt leaf: block=11351392256 slot=16 extent bytenr=492818432 len=4096 invalid data ref offset, have 4294936711 expect aligned to 4096
    Mar  2 18:06:33 Cube kernel: BTRFS error (device loop2): block=11351392256 read time tree block corruption detected
    Mar  2 18:06:33 Cube kernel: BTRFS critical (device loop2): corrupt leaf: block=11351392256 slot=16 extent bytenr=492818432 len=4096 invalid data ref offset, have 4294936711 expect aligned to 4096
    Mar  2 18:06:33 Cube kernel: BTRFS error (device loop2): block=11351392256 read time tree block corruption detected
    Mar  2 18:06:33 Cube kernel: BTRFS critical (device loop2): corrupt leaf: block=11351392256 slot=16 extent bytenr=492818432 len=4096 invalid data ref offset, have 4294936711 expect aligned to 4096
    Mar  2 18:06:33 Cube kernel: BTRFS error (device loop2): block=11351392256 read time tree block corruption detected
    Mar  2 18:06:33 Cube kernel: BTRFS: error (device loop2) in btrfs_create_free_space_tree:1189: errno=-5 IO failure
    Mar  2 18:06:33 Cube kernel: BTRFS warning (device loop2): failed to create free space tree: -5
    Mar  2 18:06:33 Cube kernel: BTRFS error (device loop2): commit super ret -30
    Mar  2 18:06:33 Cube root: mount: /var/lib/docker: can't read superblock on /dev/loop2.
    Mar  2 18:06:33 Cube kernel: BTRFS error (device loop2): open_ctree failed
    Mar  2 18:06:33 Cube root: mount error
    Mar  2 18:06:33 Cube emhttpd: shcmd (323): exit status: 1
    Mar  2 18:06:33 Cube emhttpd: shcmd (336): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1

     

    So it's looking like my docker.img got a wee bit horked.  I'm doing my due diligence of googling for ideas on how to fix it.
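
    My current plan, before trying anything destructive, is a read-only check of the image to see how bad the damage is.  Something along these lines (the image isn't mounted anyway, since the mount fails):

    # read-only check; without --repair it won't write anything to the image
    btrfs check /mnt/user/system/docker/docker.img

    # if the damage is limited to the extent / free-space trees, the usual options seem to be
    # btrfs check --repair, or simply recreating docker.img and re-pulling the containers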

  3. Okay, by process of elimination, it was my Windows PC.

     

    Then, using a network traffic viewer, I narrowed it down to my browser, even though I had no tabs open.

     

    But...  I had an extension that was apparently "incorrectly" hitting my server.  I disabled the extension, and there are no more messages in the log.

     

    Thank you for the help!

  4. My server "cube" was misbehaving this morning (the UI was unreachable, apps weren't responding, etc.), so I power-cycled it and shut down the last few dockers I had been tinkering with.  For the most part, things seemed back to normal, except for one thing in my syslog.

     

    Dec 16 09:10:14 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:19 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:19 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:24 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:24 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:29 Cube root: error: /jsonrpc: missing csrf_token
    Dec 16 09:10:29 Cube root: error: /jsonrpc: missing csrf_token

    Roughly every 5 seconds I get two missing csrf_token errors.  I saw in the FAQ that this can be caused by an out-of-date plugin.  I don't use a whole lot of plugins, but I removed a few I knew I wasn't using, and it's still occurring.  I've shut down all my VMs and all my dockers.

     

    This is my list of plugins

     

    ca.backup2.plg - 2020.10.21
    ca.cfg.editor.plg - 2020.10.21
    ca.cleanup.appdata.plg - 2020.10.21
    ca.turbo.plg - 2020.10.21
    ca.update.applications.plg - 2020.10.21
    community.applications.plg - 2020.12.14a
    dynamix.cache.dirs.plg - 2020.08.03
    dynamix.s3.sleep.plg - 2020.06.21
    dynamix.ssd.trim.plg - 2020.06.21
    dynamix.system.autofan.plg - 2020.06.21
    dynamix.system.buttons.plg - 2020.06.20
    dynamix.system.info.plg - 2020.06.21
    dynamix.system.stats.plg - 2020.06.21
    dynamix.system.temp.plg - 2020.06.20
    fix.common.problems.plg - 2020.12.05
    NerdPack.plg - 2019.12.31
    preclear.disk.plg - 2020.12.13c
    unassigned.devices.plg - 2020.12.13b
    unassigned.devices-plus.plg - 2020.05.22
    unRAIDServer.plg - 6.8.3

    I'm just uncertain how to get from the entry in the syslog to the root cause.  I searched for "/jsonrpc" and got a lot of hits, but nothing that stood out to me as related.
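
    In the meantime I want to see where the requests are actually coming from.  Something like this on the server console should show which client IP is hitting the web UI (assuming tcpdump is available on the box; the interface name may be br0 or eth0 depending on your network setup):

    # watch for new connections to the web UI and note the source IPs
    tcpdump -nn -i br0 'tcp dst port 80'

    # whichever client shows activity every ~5 seconds, lining up with the
    # csrf_token entries in the syslog, is the machine to look at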

     

    Here are my diagnostics.  I'm pretty sure in the syslog you can see me removing the unused plugins, but the error keeps going.

     

    Thanks!

     

    cube-diagnostics-20201216-0906.zip

  5. So I installed Kiwi Syslog Service Manager on my PC; it's running on 192.168.40.154.  In Unraid I set "Local syslog server" to "Disabled" and "Remote syslog server" to "192.168.40.154".  I hit Apply, and nothing ever shows up in Kiwi.
     

    I checked the Windows firewall; Kiwi has full access for TCP & UDP.

     

    I did a ping both ways, and Unraid & Windows can see each other.

     

    I tried

     

    echo -n "test message" | nc -u -w1 192.168.40.154 514

     

    And nothing.

     

    I switched Unraid & Kiwi to TCP and tried

     

    echo -n "test message" | nc -w1 192.168.40.154 514

     

    And nothing.

     

    So, what is the trick to getting Unraid to log to a remote syslog server?  I searched this forum, and most threads are really old.  I tried Reddit and found similar issues.  No YouTube videos.
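
    For what it's worth, the two things I plan to check next from the Unraid console (my understanding, which may be wrong, is that the GUI setting writes a forwarding rule into rsyslog's config):

    # confirm the GUI actually wrote a forwarding rule for the Kiwi box
    grep -r "192.168.40.154" /etc/rsyslog.conf /etc/rsyslog.d 2>/dev/null

    # send a test message through the syslog daemon itself; a raw nc packet has no
    # syslog priority header, so Kiwi may not treat it as a real syslog message
    logger -t remotetest "hello from unraid"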

     

     

     

  6. Okay, finally had it crash and kept the syslog. 

    In the log I can see where I restarted the server on 2/9 around 20:40-ish.  Then at 01:00 there's a long sequence of:

     

    Feb 10 01:00:06 Cube kernel: veth40a15f2: renamed from eth0
    Feb 10 01:00:06 Cube kernel: docker0: port 1(veth5b405b7) entered disabled state
    Feb 10 01:00:06 Cube kernel: docker0: port 1(veth5b405b7) entered disabled state
    Feb 10 01:00:06 Cube kernel: device veth5b405b7 left promiscuous mode
    Feb 10 01:00:06 Cube kernel: docker0: port 1(veth5b405b7) entered disabled state

    I believe that's when my internet went out overnight, so the various Docker containers were complaining.  Around 01:11 the internet came back, and then there's a gap in the log between 01:11 and 13:19, when the server was restarted after it locked up around noon.

     

    Feb 10 01:11:26 Cube kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb30dfd9: link becomes ready
    Feb 10 01:11:26 Cube kernel: docker0: port 10(vethb30dfd9) entered blocking state
    Feb 10 01:11:26 Cube kernel: docker0: port 10(vethb30dfd9) entered forwarding state
    Feb 10 13:19:16 Cube kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
    Feb 10 13:19:16 Cube kernel: Linux version 4.19.98-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Sun Jan 26 09:15:03 PST 2020
    Feb 10 13:19:16 Cube kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot
    Feb 10 13:19:16 Cube kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'

    The latest diagnostics are attached.  In the meantime I disabled hardware acceleration in Plex, and I'm going to try to crash it later, once the latest parity check completes.
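
    (Side note, in case it helps anyone reading along: a quick way to spot the reboot boundaries in a saved syslog is to grep for the kernel boot banner; every hit marks a fresh boot, and anything between two hits is one uptime.)

    grep -n "Linux version" /var/log/syslog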

     

    Thanks,

    cube-diagnostics-20200210-1650.zip

  7. I have a semi-regular occurrence where streaming a lower-res (<720p) video via binhex-plexpass causes my entire Unraid server (6.8.2, although it occurred in prior versions as well) to lock up and become completely unresponsive.  Even a keyboard/mouse/monitor connected directly to the server is unresponsive.  I have to power-cycle it.  It has always come back fine, and everything works as expected... until the next hard lock.

     

    And while a 'fix' from the community would be nice, what I'd really like to find out is how to triage the issue myself.  I'm actually a full-stack developer, so I've got experience with Docker & Linux, but for some reason the "Unraid runs from RAM" bit messes with my head, and I can't figure out where to start. :)

     

    So if anyone can offer a little guidance, I'd like to learn more and dig into what's going on.

     

    Thanks!