Posts posted by snowboardjoe

  1. Ouch, that was painful. It got to the point where I could not load anything from the UI unless I knew the exact URL. I saw in the release notes that a multi-minute delay was possible. IMHO, if there was going to be that much of an impact, it should not have been released that way. This was very disruptive.

     

    I've removed the plugin for now until it's stable again.

  2. This tripped me up today as it rendered several containers as orphans. When trying to fix the problem, I kept getting errors where certain null values would render the container as an orphan rather than just erroring out and returning me to the configuration screen. I'm now aware of this patch after more research and some feedback. Fortunately I was able to fix the problem with new templates for my containers.

     

    I know this is not an unraid bug, but if there are changes that can break things we've been relying on for a long time, this needs to be called out in the upgrade notes.

  3. Running 6.12.6 here. A few weeks ago, when I ran into a btrfs emergency, I used the restore command to copy the contents to an externally attached SSD (Samsung T7, formatted as hfsplus). I was able to copy the restored contents there for safekeeping while I did my btrfs repairs. A week later I realized my external drive was not attached to my personal system (Mac mini). I went to plug it in and it would not mount. It had power, but nothing could see it. Samsung Magician could see the drive but said there were no mountable volumes. It knew there was data there, but could not do anything with it. I was rather stunned by this and thought maybe the drive had died.

     

    On a hunch, I took the drive, plugged it back into unraid and mounted it. All the data was there (old and new). Uh, ok, this is odd; it looked perfectly fine. Detached it from unraid, plugged it back into the Mac mini, and it mounted immediately as if nothing had ever happened. All data was present as expected. Any thoughts on what might have happened here? Happy my data is fine, but very strange behavior. It's as if the first mount on unraid left the filesystem in some bad state, and repeating the operation set things straight again.
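
    For anyone hitting the same thing: my best guess is that the first session on unraid left the HFS+ volume in a dirty state the Mac refused to touch, and mounting/unmounting it again cleaned it up. A minimal way to check the volume next time, assuming the T7 shows up as /dev/sdX1 on the unraid side (device name is just a placeholder) and the hfsprogs tools are available:

    umount /dev/sdX1                # make sure nothing has it mounted
    fsck.hfsplus -f /dev/sdX1       # force a full check of the HFS+ volume

    Or from the Mac side, where disk4 is whatever diskutil list reports for the T7:

    diskutil verifyVolume disk4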

  4. 1 hour ago, snowboardjoe said:

    Yeah, going to dig back to older backups too and see if I can get past the corruption. Check your event log and make sure there are no new errors involving the database. I wish I had known there was a problem much earlier.

     

    Still not working very well. I jumped to a backup from almost a month ago and it's still showing database errors. Sonarr is running, but there are constant database error events.
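
    In case it helps anyone else hitting this: before burning through more backups, it may be worth trying SQLite's built-in recovery on the broken file. A rough sketch, assuming the database is the usual sonarr.db under appdata (container name and paths are examples; the exact layout depends on the image) and a reasonably recent sqlite3 CLI that has the .recover command:

    docker stop sonarr                        # nothing should be writing to the file
    cd /mnt/cache/appdata/sonarr              # example appdata path, adjust to yours
    cp sonarr.db sonarr.db.bad                # always work on a copy
    sqlite3 sonarr.db.bad ".recover" | sqlite3 sonarr-recovered.db
    sqlite3 sonarr-recovered.db "PRAGMA integrity_check;"   # should print "ok"
    mv sonarr.db sonarr.db.broken && mv sonarr-recovered.db sonarr.db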

  5. Found the resources on how to restore and did that from the backup taken on 27 Dec. Had to roll back to an earlier version to match that backup. The WebUI is restored and performing updates, but it still seems like there is corruption in the database.

     

    23:11	HousekeepingService	Error running housekeeping task: CleanupCommandQueue: database disk image is malformed database disk image is malformed	
    23:11	CommandExecutor	Error occurred while executing task MessagingCleanup: database disk image is malformed database disk image is malformed	
    23:10	EventAggregator	CommandQueueManager failed while processing [ApplicationStartedEvent]: database disk image is malformed database disk image is malformed
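
    A quick way to confirm whether a restored copy is actually clean before letting Sonarr touch it (a sketch; the path is an example and the container should be stopped first):

    sqlite3 /mnt/cache/appdata/sonarr/sonarr.db "PRAGMA integrity_check;"   # "ok" means the file itself is sound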

     

  6. Sonarr container is offline. Getting some ugly database errors on startup. Pasting the main bits here and attaching the full log.

     

    While Processing:
    "INSERT INTO "Commands_temp" ("Id", "Name", "Body", "Priority", "Status", "QueuedAt", "StartedAt", "EndedAt", "Duration", "Exception", "Trigger", "Result") SELECT "Id", "Name", "Body", "Priority", "Status", "QueuedAt", "StartedAt", "EndedAt", "Duration", "Exception", "Trigger", "Result" FROM "Commands""
     ---> code = Constraint (19), message = System.Data.SQLite.SQLiteException (0x800027AF): constraint failed
    NOT NULL constraint failed: Commands_temp.Body
    
    [Info] NzbDrone.Core.Datastore.Migration.Framework.NzbDroneSQLiteProcessor: Rolling back transaction 
    
    [Fatal] ConsoleApp: EPIC FAIL! 
    
    [v4.0.0.748] NzbDrone.Common.Exceptions.SonarrStartupException: Sonarr failed to start: Error creating main database
     ---> System.Exception: constraint failed
    NOT NULL constraint failed: Commands_temp.Body
    While Processing:
    "INSERT INTO "Commands_temp" ("Id", "Name", "Body", "Priority", "Status", "QueuedAt", "StartedAt", "EndedAt", "Duration", "Exception", "Trigger", "Result") SELECT "Id", "Name", "Body", "Priority", "Status", "QueuedAt", "StartedAt", "EndedAt", "Duration", "Exception", "Trigger", "Result" FROM "Commands""
     ---> code = Constraint (19), message = System.Data.SQLite.SQLiteException (0x800027AF): constraint failed
    NOT NULL constraint failed: Commands_temp.Body
    
    Press enter to exit...
    Non-recoverable failure, waiting for user intervention...
    

     

    Any ideas what may be happening here? It seems to have started a few days ago, when I did an update this past weekend, and I did not realize it was down.

     

    More data I found in the logs when the database problem started:

     

    2023-12-28 06:19:11.1|Info|ExistingOtherExtraImporter|Found 0 existing other extra files
    2023-12-28 06:19:11.1|Info|ExistingExtraFileService|Found 0 possible extra files, imported 0 files.
    2023-12-28 06:20:50.7|Error|CommandExecutor|Error occurred while executing task MessagingCleanup
    
    [v3.0.10.1567] code = Corrupt (11), message = System.Data.SQLite.SQLiteException (0x800007EF): database disk image is malformed
    database disk image is malformed
      at System.Data.SQLite.SQLite3.Reset (System.Data.SQLite.SQLiteStatement stmt) [0x00088] in <cf516e4846354910b3d60749c894b1bf>:0 
      at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x0006e] in <cf516e4846354910b3d60749c894b1bf>:0 
      at System.Data.SQLite.SQLiteDataReader.NextResult () [0x00174] in <cf516e4846354910b3d60749c894b1bf>:0 
      at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x0008e] in <cf516e4846354910b3d60749c894b1bf>:0 
      at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader..ctor(System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
      at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <cf516e4846354910b3d60749c894b1bf>:0 
      at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery (System.Data.CommandBehavior behavior) [0x00006] in <cf516e4846354910b3d60749c894b1bf>:0 

     

    sonarr_database_error.txt

  7. Need help troubleshooting an install that is no longer working. I had a BTRFS issue earlier that needed repair, and this one app (out of about 15) did not come back. When I try to start it, it shuts down within 1 second. Nothing in the logs other than declaring the service is starting. What's the best way to debug this?
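
    For reference, a reasonable starting point (the container name "myapp" below is a placeholder for the actual app):

    docker ps -a --filter name=myapp                                     # status and exit code at a glance
    docker logs --tail 200 myapp                                         # anything it printed before exiting
    docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' myapp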

  8. Went through several other steps, but did not make any progress. Ran restore to an external drive and that seemed to grab everything just fine. Then, as a last-ditch effort, I ran the repair command:

     

    root@laffy:/etc# btrfs check --repair --force /dev/sdg1
    enabling repair mode
    Opening filesystem to check...
    WARNING: filesystem mounted, continuing because of --force
    Checking filesystem on /dev/sdg1
    UUID: 8bdd3d07-cbb0-4d53-a9f2-da67099186ea
    [1/7] checking root items
    Fixed 0 roots.
    [2/7] checking extents
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    super bytes used 258022117376 mismatches actual used 258022100992
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    No device size related problem found
    [3/7] checking free space tree
    [4/7] checking fs roots
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    
    ...identical responses deleted for brevity...
    
    [5/7] checking only csums items (without verifying data)
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    [6/7] checking root refs
    Recowing metadata block 2151677952
    parent transid verify failed on 2151677952 wanted 14578466 found 14578474
    Ignoring transid failure
    [7/7] checking quota groups skipped (not enabled on this FS)
    found 516044218368 bytes used, no error found
    total csum bytes: 415007296
    total tree bytes: 1490927616
    total fs tree bytes: 743276544
    total extent tree bytes: 180305920
    btree space waste bytes: 374203586
    file data blocks allocated: 943655108608
     referenced 490961231872

     

    Ran check one more time and it came back clean. Remounted the filesystem as RW and it remained stable (previously it would go RO within about 30 seconds). Did a full reboot and all services are back online, including all containers. Watching for stability at this point.
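
    Roughly what the ongoing verification looks like (btrfs check should only be run against an unmounted filesystem, e.g. with the array stopped; the device and mount point are from my setup):

    btrfs check --readonly /dev/sdg1      # read-only check while the pool is unmounted
    btrfs scrub start -B /mnt/cache       # verify data/metadata checksums once mounted RW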

  9. Making a copy of all the data that I can to a separate disk. Turns out I do not have full backups of everything as I thought I did. Then I'm going to take a look at this item I found and see if there is a way to recover using an older copy of the tree root (usebackuproot).

     

    https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/index.php/FAQ.html#How_do_I_recover_from_a_.22parent_transid_verify_failed.22_error.3F

     

    I fear this entire filesystem may be unrecoverable, but I'm still fighting a little more.
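
    The gist of what that FAQ entry suggests, as I understand it (mount read-only first so nothing else can be damaged; /mnt/recovery and the destination path are just examples):

    mkdir -p /mnt/recovery
    mount -o ro,usebackuproot /dev/sdg1 /mnt/recovery

    If even that fails, btrfs restore can copy files off the unmounted device; -D is a dry run that only lists what it would restore:

    btrfs restore -D /dev/sdg1 /mnt/disks/external
    btrfs restore /dev/sdg1 /mnt/disks/external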

  10. /mnt/cache is still in a RO state:
     

    root@laffy:/var/lib/btrfs# btrfs scrub start -B /mnt/cache 
    ERROR: scrubbing /mnt/cache failed for device id 1: ret=-1, errno=30 (Read-only file system)
    ERROR: scrubbing /mnt/cache failed for device id 2: ret=-1, errno=30 (Read-only file system)
    scrub canceled for 8bdd3d07-cbb0-4d53-a9f2-da67099186ea
    Scrub started:    Mon Dec 18 07:40:35 2023
    Status:           aborted
    Duration:         0:00:00
    Total to scrub:   0.00B
    Rate:             0.00B/s
    Error summary:    no errors found

     

    I don't know how to clear this condition.
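
    From what I can tell (the later dmesg output says "Remounting read-write after error is not allowed"), once btrfs flips to read-only after an error it needs a fresh mount, not a remount. Roughly, either stop and restart the array from the webUI, or something like:

    umount /mnt/cache
    btrfs check --readonly /dev/sdg1     # optional sanity check while it is unmounted
    mount /dev/sdg1 /mnt/cache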

  11. Memtest ran all night, 9 passes, and was clean. So I'm now operating with 50% of the RAM I had before this whole mess started. Ran a scrub and everything came back with no errors. I rebooted again, but Docker won't come up because the cache is still read-only:

     

    Dec 18 07:19:31 laffy emhttpd: shcmd (91): /usr/local/sbin/mount_image '/mnt/cache/appdata/docker.img' /var/lib/docker 32
    Dec 18 07:19:31 laffy root: truncate: cannot open '/mnt/cache/appdata/docker.img' for writing: Read-only file system
    Dec 18 07:19:31 laffy kernel: loop2: detected capacity change from 0 to 67108864
    Dec 18 07:19:31 laffy kernel: BTRFS: device fsid 51f116da-f032-424a-9b4c-da5dd26d4a6f devid 1 transid 4251808 /dev/loop2 scanned by mount (10258)
    Dec 18 07:19:31 laffy kernel: BTRFS info (device loop2): using crc32c (crc32c-intel) checksum algorithm
    Dec 18 07:19:31 laffy kernel: BTRFS info (device loop2): using free space tree
    Dec 18 07:19:31 laffy kernel: BTRFS info (device loop2): enabling ssd optimizations
    Dec 18 07:19:31 laffy kernel: BTRFS info (device loop2): start tree-log replay
    Dec 18 07:19:31 laffy kernel: BTRFS warning (device loop2): log replay required on RO media
    Dec 18 07:19:31 laffy root: mount: /var/lib/docker: can't read superblock on /dev/loop2.
    Dec 18 07:19:31 laffy root:        dmesg(1) may have more information after failed mount system call.
    Dec 18 07:19:31 laffy root: mount error
    Dec 18 07:19:31 laffy kernel: BTRFS error (device loop2): open_ctree failed
    Dec 18 07:19:31 laffy emhttpd: shcmd (91): exit status: 1

     

    Looking at dmesg:

     

    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1): using crc32c (crc32c-intel) checksum algorithm
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1): using free space tree
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1): enabling ssd optimizations
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1): start tree-log replay
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1): checking UUID tree
    [Mon Dec 18 07:19:25 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 1 wanted 14578466 found 14578474
    [Mon Dec 18 07:19:25 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 2 wanted 14578466 found 14578474
    [Mon Dec 18 07:19:25 2023] BTRFS: error (device sdg1: state A) in do_free_extent_accounting:2845: errno=-5 IO failure
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1: state EA): forced readonly
    [Mon Dec 18 07:19:25 2023] BTRFS error (device sdg1: state EA): failed to run delayed ref for logical 2465001472 num_bytes 12288 type 178 action 2 ref_mod 1: -5
    [Mon Dec 18 07:19:25 2023] BTRFS: error (device sdg1: state EA) in btrfs_run_delayed_refs:2149: errno=-5 IO failure
    [Mon Dec 18 07:19:25 2023] BTRFS info (device sdg1: state EMA): turning on async discard
    [Mon Dec 18 07:19:25 2023] BTRFS error (device sdg1: state EMA): Remounting read-write after error is not allowed
    [Mon Dec 18 07:19:30 2023] loop2: detected capacity change from 0 to 67108864
    [Mon Dec 18 07:19:30 2023] BTRFS: device fsid 51f116da-f032-424a-9b4c-da5dd26d4a6f devid 1 transid 4251808 /dev/loop2 scanned by mount (10258)
    [Mon Dec 18 07:19:30 2023] BTRFS info (device loop2): using crc32c (crc32c-intel) checksum algorithm
    [Mon Dec 18 07:19:30 2023] BTRFS info (device loop2): using free space tree
    [Mon Dec 18 07:19:30 2023] BTRFS info (device loop2): enabling ssd optimizations
    [Mon Dec 18 07:19:30 2023] BTRFS info (device loop2): start tree-log replay
    [Mon Dec 18 07:19:30 2023] BTRFS warning (device loop2): log replay required on RO media
    [Mon Dec 18 07:19:30 2023] BTRFS error (device loop2): open_ctree failed

     

    So, it looks like some replay of a journal is needed here? Researching that error. Attaching diagnostics in current state.

    laffy-diagnostics-20231218-0728.zip
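
    If the goal is only to peek inside docker.img without the log replay, btrfs has a nologreplay mount option, though it only works together with ro (the mount point below is a placeholder). The real blocker is still the read-only cache pool underneath it:

    mkdir -p /mnt/tmp-docker
    mount -o loop,ro,nologreplay /mnt/cache/appdata/docker.img /mnt/tmp-docker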

  12. I tested the RAM with two sticks and it passed. Ran the check with the other two sticks; that passed too. When all 4 sticks are installed, memtest fails. Ugh! I just looked up the RAM compatibility list and discovered that 4 sticks of RAM is not a supported configuration. I don't know how it worked for almost a year like this. 

     

    New game plan:

    1. Let memtest run through the night with the existing RAM and verify no errors.
    2. Boot unraid in safe mode.
    3. Run btrfs scrub against cache pool and check results.

    Open to other suggestions in the hope I can recover data, but since I'm not well versed with btrfs, I want to make sure I'm following best practices here to prevent further damage. Again, my logs said there was only a problem with /dev/sdg1, so I'm hoping the damage is limited to that drive and I can work around it by using the other mirror to get things back in shape.
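
    For step 3, roughly what I have in mind (-B keeps the scrub in the foreground; -d is optional and adds per-device statistics):

    btrfs scrub start -Bd /mnt/cache
    btrfs scrub status /mnt/cache
    btrfs device stats /mnt/cache        # per-device error counters should stay at zero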

  13. I upgraded from 6.12.4 to 6.12.6 this morning and everything seemed to be running fine. At 6:36pm this evening everything became unstable with Docker. Plex stopping was the first sign, and I started digging into the issue. These were the very first events at that time in my syslog:

     

    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 1 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 2 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state A) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1: state A): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1: state EA): parent transid verify failed on logical 2151677952 mirror 0 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS info (device sdg1: state EA): forced readonly
    [Sun Dec 17 18:36:57 2023] BTRFS error (device sdg1: state EA): parent transid verify failed on logical 2151677952 mirror 1 wanted 14578466 found 14578474
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure
    [Sun Dec 17 18:36:57 2023] BTRFS: error (device sdg1: state EA) in btrfs_finish_ordered_io:3319: errno=-5 IO failure

     

    I was able to reboot and the system came back online. However, I immediately started seeing lots of BTRFS errors on /dev/sdg1. That drive is one of two drives in my cache pool. According to the dashboard, everything is fine. Still verifying things in the logs to see what is going on. I've not dealt with BTRFS errors like this before.

     

    Going to get the basic array stable again with no Docker to get some stability back. I'm thinking this is just a timing coincidence, with an SSD failure happening around the same time I chose to upgrade today? Open to suggestions on how to diagnose and restore services here.
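
    To help separate "the SSD is failing" from "the filesystem got corrupted", a quick first check against the device from the logs (smartctl is available on unraid):

    smartctl -a /dev/sdg       # look at reallocated sectors, wear/media attributes and the error log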

  14. I just started getting this error as well after upgrading to 6.12.6 this morning. Plex was running all day and stopped responding this evening. When I bring up Docker, it has no stats for any container. Here is the syslog:

     

    Dec 17 18:37:00 laffy kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:00 laffy kernel: I/O error, dev loop2, sector 2556800 op 0x1:(WRITE) flags 0x1800 phys_seg 4 prio class 2
    Dec 17 18:37:00 laffy kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:00 laffy kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2494: errno=-5 IO failure (Error while writing out transaction)
    Dec 17 18:37:00 laffy kernel: BTRFS info (device loop2: state E): forced readonly
    Dec 17 18:37:00 laffy kernel: BTRFS warning (device loop2: state E): Skipping commit of aborted transaction.
    Dec 17 18:37:00 laffy kernel: BTRFS: error (device loop2: state EA) in cleanup_transaction:1992: errno=-5 IO failure
    Dec 17 18:37:04 laffy kernel: docker0: port 2(vethf809026) entered disabled state
    Dec 17 18:37:04 laffy kernel: veth3f981c1: renamed from eth0
    Dec 17 18:37:09 laffy kernel: lo_write_bvec: 25 callbacks suppressed
    Dec 17 18:37:09 laffy kernel: loop: Write error at byte offset 14684160, length 4096.
    Dec 17 18:37:09 laffy kernel: loop: Write error at byte offset 2864652288, length 4096.
    Dec 17 18:37:09 laffy kernel: blk_print_req_error: 25 callbacks suppressed
    Dec 17 18:37:09 laffy kernel: I/O error, dev loop2, sector 28680 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:09 laffy kernel: loop: Write error at byte offset 2728972288, length 4096.
    Dec 17 18:37:09 laffy kernel: loop: Write error at byte offset 2728976384, length 4096.
    Dec 17 18:37:09 laffy kernel: btrfs_dev_stat_inc_and_print: 25 callbacks suppressed
    Dec 17 18:37:09 laffy kernel: I/O error, dev loop2, sector 5330024 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:09 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 36, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:09 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 37, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:09 laffy kernel: I/O error, dev loop2, sector 5330032 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:09 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 38, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:09 laffy kernel: I/O error, dev loop2, sector 5595024 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:09 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 39, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:30 laffy kernel: loop: Write error at byte offset 2541936640, length 4096.
    Dec 17 18:37:30 laffy kernel: loop: Write error at byte offset 2542256128, length 4096.
    Dec 17 18:37:30 laffy kernel: loop: Write error at byte offset 2542260224, length 4096.
    Dec 17 18:37:30 laffy kernel: I/O error, dev loop2, sector 4964720 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:30 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 40, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:30 laffy kernel: I/O error, dev loop2, sector 4965344 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:30 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 41, rd 0, flush 0, corrupt 0, gen 0
    Dec 17 18:37:30 laffy kernel: I/O error, dev loop2, sector 4965352 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
    Dec 17 18:37:30 laffy kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 42, rd 0, flush 0, corrupt 0, gen 0

     

    I've never seen BTRFS errors before.
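
    A quick way to confirm whether the cache pool (and therefore the loop-mounted docker.img) has been flipped read-only, which is what all of those write errors suggest:

    grep -E ' /mnt/cache | /var/lib/docker ' /proc/mounts     # look for "ro" in the mount options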

  15. Since upgrading to 2.1.1 the client is still seeding existing torrents, but I'm unable to add any new ones. When I try to add a new URL I get this error in the logs:

     

    Traceback (most recent call last):
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1043, in dispatcher
        return func(*args, **kwargs)
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1594, in _finishResponse_WAITING
        self._giveUp(Failure(reason))
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1644, in _giveUp
        self._disconnectParser(reason)
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1633, in _disconnectParser
        parser.connectionLost(reason)
    --- <exception caught here> ---
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 555, in connectionLost
        self.response._bodyDataFinished()
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1043, in dispatcher
        return func(*args, **kwargs)
      File "/usr/lib/python3.11/site-packages/twisted/web/_newclient.py", line 1283, in _bodyDataFinished_CONNECTED
        self._bodyProtocol.connectionLost(reason)
      File "/usr/lib/python3.11/site-packages/twisted/web/client.py", line 1432, in connectionLost
        self.original.connectionLost(reason)
      File "/usr/lib/python3.11/site-packages/deluge/httpdownloader.py", line 73, in connectionLost
        with open(self.agent.filename, 'wb') as _file:
    builtins.IsADirectoryError: [Errno 21] Is a directory: '/tmp/delugeweb-79lq129d/'
    

     

    I do see the directory created there, but I still don't understand what the problem is.

     

    [root@8db4cdfa95b3 tmp]# ls -l
    total 52
    drwx------ 1 nobody users     0 Oct 22 20:06 delugeweb-0mcyk1et
    drwx------ 1 nobody users     0 Oct 22 20:05 delugeweb-1p815gjl
    drwx------ 1 nobody users     0 Oct 23 19:40 delugeweb-79lq129d
    drwx------ 1 nobody users     0 Oct 23 19:39 delugeweb-lojv_rhe
    drwx------ 1 nobody users     0 Oct 22 20:15 delugeweb-wa68rbic
    drwx------ 1 nobody users     0 Oct 22 20:08 delugeweb-wb6i4q_0
    drwx------ 1 nobody users     0 Oct 22 20:05 delugeweb-z71r_x_h
    -rw-rw-rw- 1 root   root    946 Oct 22 20:08 getiptables
    -rw-rw-rw- 1 root   root     15 Oct 22 20:08 getvpnextip
    -rw------- 1 root   root      0 Oct 22 20:08 start-script-stderr---supervisor-s2urs698.log
    -rw------- 1 root   root  22049 Oct 23 19:45 start-script-stdout---supervisor-1g8290c4.log
    -rw-rw-rw- 1 root   root      1 Oct 22 20:08 vpngatewayip
    -rw-rw-rw- 1 root   root      9 Oct 22 20:08 vpnip
    -rw------- 1 root   root   6995 Oct 23 19:41 watchdog-script-stderr---supervisor-15emil97.log
    -rw------- 1 root   root   1195 Oct 22 20:08 watchdog-script-stdout---supervisor-bmc8bdvw.log
    

     

    Any suggestions on what to look at next?
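
    Not a fix, but a possible workaround while the add-by-URL path is broken: pull the .torrent down manually and add it as a file instead. The URL and path below are placeholders, and the deluge-console step assumes the default localclient auth works from inside the container:

    curl -L -o /tmp/example.torrent 'https://tracker.example.org/some.torrent'
    deluge-console "add /tmp/example.torrent"      # or use Add -> File in the web UI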
