AncientVale

Posts posted by AncientVale

  1. I'm wondering if someone can help me resolve the following error on my Nextcloud install on Unraid, or at least point me in the right direction?

     

    Quote

    "Error: Undefined class constant 'CACHE_TTL' /config/www/nextcloud/lib/private/BackgroundJob/Job.php - line 62: OCA\Bookmarks\BackgroundJobs\PreviewsJob->run(null) /config/www/nextcloud/lib/private/BackgroundJob/TimedJob.php - line 57: OC\BackgroundJob\Job->execute(OC\BackgroundJob\JobList {}, OC\Log {}) /config/www/nextcloud/cron.php - line 126: OC\BackgroundJob\TimedJob->execute(OC\BackgroundJob\JobList {}, OC\Log {})"

     

  2. So I lost power at my house last night and my Unraid server was down for the evening. Everything is back up and running now, but I have an error showing on my server:

    Quote

    Last job execution ran yesterday. Something seems wrong.

    Can someone tell me what command I would use to get those to run today? Or should I just wait and let them run at their normally scheduled times?
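
    If this is Nextcloud's background-jobs warning, my rough plan is to kick the cron job off once by hand rather than wait (sketch only; assuming the linuxserver.io container named "nextcloud" with the "abc" web user):

        # Run Nextcloud's background jobs once manually instead of waiting for the next cron run
        docker exec -it nextcloud sudo -u abc php -f /config/www/nextcloud/cron.php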

  3. I did what you told me and it was fine for a little over a month. Now I am getting the below in the log:

     

    Quote

    Jul 3 22:22:43 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 85, rd 0, flush 0, corrupt 0, gen 0
    Jul 3 22:22:43 AncientValeMkII kernel: loop: Write error at byte offset 12355469312, length 4096.
    Jul 3 22:22:43 AncientValeMkII kernel: print_req_error: I/O error, dev loop2, sector 24131776
    Jul 3 22:22:43 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 86, rd 0, flush 0, corrupt 0, gen 0
    Jul 3 22:22:49 AncientValeMkII kernel: loop: Write error at byte offset 12193988608, length 4096.
    Jul 3 22:22:49 AncientValeMkII kernel: print_req_error: I/O error, dev loop2, sector 23816384
    Jul 3 22:22:49 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 87, rd 0, flush 0, corrupt 0, gen 0
    Jul 3 22:22:49 AncientValeMkII kernel: loop: Write error at byte offset 12278943744, length 4096.
    Jul 3 22:22:49 AncientValeMkII kernel: print_req_error: I/O error, dev loop2, sector 23982312
    Jul 3 22:22:49 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 88, rd 0, flush 0, corrupt 0, gen 0

     

    Should I do that again? And does this mean my Cache drive is dying?
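
    Before doing anything else, I was going to grab these from the console to see whether the errors point at the SSD itself or just the docker image (a sketch; the SMART device name is a placeholder and needs to match the actual cache drive):

        # Error counters btrfs has recorded for the cache filesystem
        btrfs device stats /mnt/cache
        # SMART health of the underlying SSD (replace /dev/sdX with the real cache device)
        smartctl -a /dev/sdX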

  4. So I've had basically all of these issues at one point or another in my Fix Common Problems extended tests.

     

    • For the first problem, the shares with differing cases: check all of your Docker container settings and see if you have a differing case there. You will likely need to use Dolphin or Krusader to move the files to the proper share before deleting the one with the wrong case (a console alternative is sketched after this list).
    • For the duplicated file, go into the user share and delete the duplicate file. 
    • For the final issue, run the Docker Safe New Permissions tool under Tools.
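
    If you'd rather do the case fix from the Unraid console instead of Dolphin/Krusader, something along these lines works (sketch only; "Media" and "media" are placeholder share names, and you would repeat it for each disk that holds the wrong-case folder):

        # See which top-level shares differ only by case
        ls -1 /mnt/user/
        # Merge the wrong-case folder into the correct one on a given disk, then remove the empty leftover
        rsync -a /mnt/disk1/media/ /mnt/disk1/Media/
        rm -r /mnt/disk1/media
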
  5. Hi guys,

     

    I was hoping someone could take a look at this call trace and let me know what they think it may be. I am not nearly as well versed in this kind of thing, and it looks like a very specific error. I have ECC memory and am running a Gigabyte GA-7TESM motherboard.

     

    Quote

    Jun 25 08:58:03 Tower kernel: Call Trace:
    Jun 25 08:58:03 Tower kernel: <IRQ>
    Jun 25 08:58:03 Tower kernel: ipv4_confirm+0xaf/0xb7
    Jun 25 08:58:03 Tower kernel: nf_hook_slow+0x37/0x96
    Jun 25 08:58:03 Tower kernel: ip_local_deliver+0xa7/0xd5
    Jun 25 08:58:03 Tower kernel: ? ip_sublist_rcv_finish+0x53/0x53
    Jun 25 08:58:03 Tower kernel: ip_rcv+0x9e/0xbc
    Jun 25 08:58:03 Tower kernel: ? ip_rcv_finish_core.isra.0+0x2e2/0x2e2
    Jun 25 08:58:03 Tower kernel: __netif_receive_skb_one_core+0x4d/0x69
    Jun 25 08:58:03 Tower kernel: process_backlog+0x7e/0x116
    Jun 25 08:58:03 Tower kernel: net_rx_action+0x10b/0x274
    Jun 25 08:58:03 Tower kernel: __do_softirq+0xce/0x1e2
    Jun 25 08:58:03 Tower kernel: do_softirq_own_stack+0x2a/0x40
    Jun 25 08:58:03 Tower kernel: </IRQ>
    Jun 25 08:58:03 Tower kernel: do_softirq+0x4d/0x59
    Jun 25 08:58:03 Tower kernel: netif_rx_ni+0x1c/0x22
    Jun 25 08:58:03 Tower kernel: macvlan_broadcast+0x10f/0x153 [macvlan]
    Jun 25 08:58:03 Tower kernel: ? __switch_to_asm+0x34/0x70
    Jun 25 08:58:03 Tower kernel: macvlan_process_broadcast+0xd5/0x131 [macvlan]
    Jun 25 08:58:03 Tower kernel: process_one_work+0x16e/0x24f
    Jun 25 08:58:03 Tower kernel: ? pwq_unbound_release_workfn+0xb7/0xb7
    Jun 25 08:58:03 Tower kernel: worker_thread+0x1dc/0x2ac
    Jun 25 08:58:03 Tower kernel: kthread+0x10b/0x113
    Jun 25 08:58:03 Tower kernel: ? kthread_park+0x71/0x71
    Jun 25 08:58:03 Tower kernel: ret_from_fork+0x35/0x40
    Jun 25 08:58:03 Tower kernel: ---[ end trace 8e776a9dbe3bcaea ]---

     

    Thank you!

  6. I tried running the command that you gave in the thread there, and I get the following:

    Quote

    ERROR: error during balancing '/mnt/cache': Read-only file system
    There may be more info in syslog - try dmesg | tail

    How can I force the cache to mount read-write instead of read-only?

     

    Some additional errors when I go to mount the filesystem:

    Quote

    May 30 13:04:57 AncientValeMkII kernel: BTRFS critical (device sdg1): corrupt leaf: root=2 block=655830777856 slot=187, bad key order, prev (15670389631644491776 229 4294936715) current (378999640064 168 208896)
    May 30 13:04:57 AncientValeMkII kernel: BTRFS: error (device sdg1) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
    May 30 13:04:57 AncientValeMkII kernel: BTRFS info (device sdg1): forced readonly
    May 30 13:05:00 AncientValeMkII kernel: BTRFS info (device loop2): disk space caching is enabled
    May 30 13:05:00 AncientValeMkII kernel: BTRFS info (device loop2): has skinny extents
    May 30 13:05:00 AncientValeMkII kernel: BTRFS warning (device loop2): log replay required on RO media
    May 30 13:05:00 AncientValeMkII root: mount: /var/lib/docker: can't read superblock on /dev/loop2.
    May 30 13:05:00 AncientValeMkII kernel: BTRFS error (device loop2): open_ctree failed
    May 30 13:05:00 AncientValeMkII root: mount error
    May 30 13:05:00 AncientValeMkII emhttpd: shcmd (698): exit status: 1
    May 30 13:05:00 AncientValeMkII emhttpd: nothing to sync
    May 30 13:05:00 AncientValeMkII unassigned.devices: Mounting 'Auto Mount' Remote Shares...
    May 30 13:10:00 AncientValeMkII root: Fix Common Problems Version 2019.05.29
    May 30 13:10:06 AncientValeMkII root: Fix Common Problems: Error: Unable to write to cache
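
    If it simply will not come back read-write, my fallback plan is just to get the data off before reformatting the cache (a sketch; /dev/sdg1 is taken from the log above, and the rescue destination is a placeholder):

        # Try a read-only mount using a backup tree root
        mkdir -p /x
        mount -o ro,usebackuproot /dev/sdg1 /x
        # If the mount still fails, btrfs restore can often pull files straight off the damaged filesystem
        btrfs restore -v /dev/sdg1 /mnt/disk1/cache_rescue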

     

  7. When I try to recreate the Docker image, I get the following:

    Quote

    May 30 12:27:36 AncientValeMkII root: ERROR: unable to resize '/var/lib/docker': Read-only file system

    And when I look at the check filesystem status, I get the below:

    Quote

    [1/7] checking root items
    [2/7] checking extents
    ERROR: ignore invalid data extent, length 18446612733170907240 is not aligned to 4096
    bad key ordering 186 187
    bad block 655830777856
    ERROR: errors found in extent allocation tree or chunk allocation
    [3/7] checking free space cache
    there is no free space entry for 378999562240-379051835392
    cache appears valid but isn't 377978093568
    there is no free space entry for 18446613197078861928-463908003840
    cache appears valid but isn't 463877439488
    [4/7] checking fs roots
    [5/7] checking only csums items (without verifying data)
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992041984-378992168960 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992173056-378992287744 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992291840-378992381952 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992386048-378992746496 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992750592-378992889856 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378992893952-378993352704 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378993356800-378993696768 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378993700864-378994229248 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378994233344-378994319360 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378994323456-378994823168 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378994827264-378994872320 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378994876416-378994946048 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378994950144-378995458048 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378995462144-378995490816 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378995494912-378996244480 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378996248576-378996584448 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378996588544-378996953088 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378996957184-378998325248 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378998329344-378999164928 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378999169024-378999427072 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378999431168-378999869440 but there is no extent record
    bad key ordering 186 187
    Error looking up extent record -1
    csum exists for 378999873536-378999992320 but there is no extent record
    ERROR: errors found in csum tree
    [6/7] checking root refs
    [7/7] checking quota groups skipped (not enabled on this FS)
    Opening filesystem to check...
    Checking filesystem on /dev/sdg1
    UUID: 6f423ce3-1272-4cd6-a6bd-285607db665f
    found 133638533120 bytes used, error(s) found
    total csum bytes: 0
    total tree bytes: 105627648
    total fs tree bytes: 0
    total extent tree bytes: 105463808
    btree space waste bytes: 25854939
    file data blocks allocated: 61014016
    referenced 61014016
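
    A couple of quick checks I can run from the console to confirm exactly what is mounted read-only (just a sketch):

        # Show current mount options for the cache and the docker loop device
        mount | grep -E '/mnt/cache|loop2'
        # Show which file backs /dev/loop2 (it should be the docker.img on the cache)
        losetup -l /dev/loop2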

     

  8. My cache drive (not a pool) is mounting as read-only and very few of my Docker containers are able to start. I have attached my full diagnostics zip file below, and I did run a read-only BTRFS check on the cache drive, but I am at a loss on how to repair it: since it isn't a pool, scrub isn't working, and it actually aborts immediately when I click the button. I also have these errors in my log:

     

    Quote

    May 30 11:50:19 AncientValeMkII kernel: loop: Write error at byte offset 4255567872, length 4096.
    May 30 11:50:19 AncientValeMkII kernel: print_req_error: I/O error, dev loop2, sector 8311656
    May 30 11:50:19 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:19 AncientValeMkII kernel: loop: Write error at byte offset 4255567872, length 4096.
    May 30 11:50:19 AncientValeMkII kernel: print_req_error: I/O error, dev loop2, sector 8311656
    May 30 11:50:19 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
    May 30 11:50:20 AncientValeMkII kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2236: errno=-5 IO failure (Error while writing out transaction)

     

    ancientvalemkii-diagnostics-20190530-1559.zip
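
    For reference, the read-only check I ran was along these lines (assuming /dev/sdg1 is the cache partition, as in the mount errors quoted above; adjust to match):

        # Read-only filesystem check (does not modify anything)
        btrfs check --readonly /dev/sdg1
        # See whether a scrub is running or was aborted
        btrfs scrub status /mnt/cache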

  9. Hi,

     

    I was on Nextcloud 13.05 and the next newest update was 14.3, so following the instructions about a page back I initiated the update to 14.03 from within Nextcloud itself. That was about 12 hours ago, and when I go to log in all I get is

     

    "This Nextcloud instance is currently in maintenance mode, which may take a while.

    This page will refresh itself when the Nextcloud instance is available again.

    Contact your system administrator if this message persists or appeared unexpectedly.

    Thank you for your patience."

     

    The logs are also unhelpful as the only warnings are:

     

    PHP Warning: require_once(/config/www/nextcloud/lib/versioncheck.php): failed to open stream: No such file or directory in /config/www/nextcloud/cron.php on line 37
    PHP Fatal error: require_once(): Failed opening required '/config/www/nextcloud/lib/versioncheck.php' (include_path='.:/usr/share/php7') in /config/www/nextcloud/cron.php on line 37

     

     

    Any assistance would be greatly appreciated. This instance was set up using SPO's instructions.
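
    For completeness, this is what I am thinking of trying next from inside the container (sketch only; assuming the linuxserver.io layout with Nextcloud at /config/www/nextcloud and the "abc" web user):

        # Get a shell inside the container
        docker exec -it nextcloud bash
        # Turn maintenance mode off, then let occ finish or repair the upgrade
        sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off
        sudo -u abc php /config/www/nextcloud/occ upgrade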

     
