Everything posted by vurt

  1. @JorgeB Sorry, hopefully this is the last question... my dockers are back, but sabnzbd is having difficulty starting up ("Execution Error"). I removed it and reinstalled it from the user template, and I'm still getting the error. This is the output from the install:

        docker run -d --name='sabnzbd' --net='lsio' -e TZ="America/New_York" -e HOST_OS="Unraid" \
          -e HOST_HOSTNAME="Tower" -e HOST_CONTAINERNAME="sabnzbd" -e 'PUID'='99' -e 'PGID'='100' \
          -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8080]/' \
          -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/sabnzbd-icon.png' \
          -p '8080:8080/tcp' -p '9090:9090/tcp' \
          -v '/mnt/user/appdata/downloads/':'/downloads':'rw' \
          -v '/mnt/user/appdata/downloads/incomplete/':'/incomplete-downloads':'rw' \
          -v '/mnt/user/appdata/sabnzbd':'/config':'rw' \
          'linuxserver/sabnzbd'

        9a17b4e103d196ccd1bc70ca2424698d2fb1ff9bb8309c50a6e039963da7e688
        docker: Error response from daemon: driver failed programming external connectivity on endpoint sabnzbd (8314f2297d197ff8245f283a14065cc78112e31c5c4cdfc1a5cd5b418f3eeb89): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use.
        The command failed.

     I can see there's a conflict with the bind address, but I don't know what that actually means or how to resolve it. Thanks so much for your help!
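     For anyone hitting the same thing: "bind: address already in use" means something on the host is already listening on port 9090, so Docker can't claim it for sabnzbd. A minimal way to find the culprit and work around it (the 9091 remap below is just an example; any free host port works):

        # See what is already listening on host port 9090
        netstat -tlpn | grep ':9090'
        docker ps --format '{{.Names}}\t{{.Ports}}' | grep 9090

        # Workaround: in the sabnzbd template, change only the HOST side of
        # the mapping, e.g. host 9091 -> container 9090
        -p '9091:9090/tcp'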
  2. Whew, thank you! @JorgeB I also lost my dockers... from what I read, I can go to the Apps tab and look at Previous Apps and reinstall from there... would that be safe to do? [I'm not seeing a lost+found folder unless I'm looking in the wrong places, or maybe there isn't one?]
  3. Hi @JorgeB, thanks so much for the quick response. I ran -n, and then had to do -L. This is the output after the repair:

        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
        ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
                - scan filesystem freespace and inode maps...
        clearing needsrepair flag and regenerating metadata
        sb_ifree 33068, counted 33075
        sb_fdblocks 2448669658, counted 2453742304
                - found root inode chunk
        Phase 3 - for each AG...
                - scan and clear agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - agno = 15
                - agno = 16
                - agno = 17
                - agno = 18
                - agno = 19
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0
                - agno = 2
                - agno = 1
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - agno = 15
                - agno = 16
                - agno = 17
                - agno = 18
                - agno = 19
        Phase 5 - rebuild AG headers and trees...
                - reset superblock...
        Phase 6 - check inode connectivity...
                - resetting contents of realtime bitmap and summary inodes
                - traversing filesystem ...
                - traversal finished ...
                - moving disconnected inodes to lost+found ...
        Phase 7 - verify and correct link counts...
        Maximum metadata LSN (35:2939140) is ahead of log (1:2).
        Format log to cycle 38.
        done
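     For reference, a sketch of the sequence that produces output like the above; it assumes the array is started in maintenance mode and the disk is md1 (substitute your actual device; newer Unraid releases use names like /dev/md1p1):

        xfs_repair -n /dev/md1   # dry run: report problems, change nothing
        xfs_repair /dev/md1      # repair; refuses and suggests -L if the log is dirty
        xfs_repair -L /dev/md1   # last resort: zeroes the log, so the most recent
                                 # metadata changes may end up in lost+found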
  4. Hi all... my setup mysteriously got this error. I've searched the forum and followed the diagnostic advice. This is the output of the filesystem check:

        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
        ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
                - scan filesystem freespace and inode maps...
        sb_ifree 33068, counted 33075
        sb_fdblocks 2448669658, counted 2453742304
                - found root inode chunk
        Phase 3 - for each AG...
                - scan (but don't clear) agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - agno = 15
                - agno = 16
                - agno = 17
                - agno = 18
                - agno = 19
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0
                - agno = 3
                - agno = 2
                - agno = 1
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - agno = 15
                - agno = 16
                - agno = 17
                - agno = 18
                - agno = 19
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
                - traversing filesystem ...
                - traversal finished ...
                - moving disconnected inodes to lost+found ...
        Phase 7 - verify link counts...
        No modify flag set, skipping filesystem flush and exiting.

     And attached is the log. Can someone please advise? This looks scarily catastrophic. tower-diagnostics-20240308-1114.zip
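     The ALERT in that -n output also points at a gentler first step: mounting the filesystem replays the XFS log, which can clear the "spurious inconsistencies" before any repair is attempted. A sketch, with the device and mount point as placeholders:

        mkdir -p /mnt/tmpfix
        mount /dev/md1 /mnt/tmpfix   # mounting replays the journal
        umount /mnt/tmpfix
        xfs_repair -n /dev/md1       # re-check; log-related complaints should be gone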
  5. Oh, does this mean I probably always had this error, but the previous version wasn't reporting it? Can I click on /mnt/user/appdata/downloads and have it excluded from the backup? I think this is the folder that's causing the error. It doesn't require backing up anyway.
  6. Thank you, that makes sense, but it wasn't an error previously. The appdata/downloads folder is used by Sabnzbd, Radarr, Sonarr, and Hydra.
  7. Yeah, still getting the 75% warning. Is that odd, given the CLI says 65% is used? The warning is always 75%, though; it's not increasing. I increased the allocation to 30G to see how that goes.
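     One way to reconcile the GUI's percentage with the CLI: on Unraid, docker.img is loop-mounted at /var/lib/docker, so standard Docker commands can break the usage down:

        df -h /var/lib/docker   # fill level of docker.img itself
        docker system df        # usage by images, containers, and volumes
        docker image prune      # reclaim dangling image layers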
  8. Hi there, I recently updated to the latest Appdata.Backup and migrated from the previous one. I'm getting this new error:

        tar creation failed! Tar said: tar: /mnt/user/appdata/downloads/incomplete: file changed as we read it

     This is the debug log ID that was sent: 030ae8ae-4efa-40f0-957a-cc7ab4d3e48c. Hope that works; I've never sent a debug log like this before!
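     The tar message itself just means files in that folder were being written while the archive was created, which is expected for an active downloads directory. At the plain-tar level the two standard outs are skipping the busy path or downgrading the warning; the backup plugin's per-folder exclusion setting amounts to the first option (paths below are illustrative):

        tar -cf backup.tar --exclude='downloads/incomplete' /mnt/user/appdata/downloads
        tar -cf backup.tar --warning=no-file-changed /mnt/user/appdata/downloads
        # note: even with the message suppressed, GNU tar may still exit non-zero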
  9. Here's the breakdown. I'm asking because I'm receiving warnings: "Docker image disk utilization of 75%." Picard, Calibre, and Deluge are far bigger than I expected.
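     To tell which of those are merely large images versus containers actively writing into docker.img, standard Docker commands are enough:

        docker ps -a --size   # per-container writable-layer size; "virtual" includes the image
        docker image ls       # image sizes on disk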
  10. @SmartPhoneLover Thank you for this! I just came across gonic and thought I'd try it as a replacement for navidrome. Are there any setup considerations for Unraid? Do I just install it from Docker Hub? Also, do you know if gonic is capable of writing tags to the music files?
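     In case it helps anyone else evaluating gonic: there's no Unraid-specific requirement; it runs straight from Docker Hub like any other container. A hypothetical minimal sketch; the image name, port mapping, and volume paths are from my reading of the gonic README and should be verified against it:

        # Assumed from the README: host 4747 -> container 80, /data for gonic's
        # state, /music (read-only) for the library. Verify before relying on it.
        docker run -d --name='gonic' \
          -p 4747:80 \
          -v /mnt/user/appdata/gonic:/data \
          -v /mnt/user/media/music:/music:ro \
          sentriz/gonic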
  11. Sorry, I didn't contact them; I wanted to check in here whether other SWAG users have encountered this. I noticed my SSL cert had expired, and when I went looking, I saw this on ZeroSSL. It looks like they've changed things so it's no longer free and unlimited, and it doesn't look like we can get wildcards at all unless we upgrade to their Premium plan. It looks self-explanatory, but I wanted to confirm I'm not ... crazy, or find out whether other SWAG users have worked around this.
  12. Has anyone encountered an issue with ZeroSSL? Either I didn't know about this previously or it's a change on their end, but I've run out of "credits" on the free plan to renew my SSL cert.
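     For anyone else who lands here: SWAG only uses ZeroSSL when the container is told to; Let's Encrypt is the default and has no such credit limit. The variable below is the one SWAG's docs use for picking the provider, though it's worth confirming against the current README:

        # In the swag template, switch the cert provider back to the default
        -e CERTPROVIDER=''          # empty = Let's Encrypt (SWAG's default)
        # instead of
        -e CERTPROVIDER='zerossl'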
  13. Hi fellow Unraiders. I'm getting this error; could it be due to the recent Unraid update?

        EPIC FAIL! NzbDrone.Core.Datastore.CorruptDatabaseException: Database file: /config/sonarr.db is corrupt, restore from backup if available. See: https://wiki.servarr.com/sonarr/faq#i-am-getting-an-error-database-disk-image-is-malformed
        ---> System.Data.SQLite.SQLiteException: database disk image is malformed
        database disk image is malformed
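     A quick way to confirm the corruption and look for Sonarr's own backups, assuming the usual Unraid appdata path and a shell where sqlite3 is available (stop the container first):

        sqlite3 /mnt/user/appdata/sonarr/sonarr.db "PRAGMA integrity_check;"
        ls /mnt/user/appdata/sonarr/Backups/scheduled/   # Sonarr's periodic zip backups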
  14. Hi fellow Unraiders. I recently replaced and rebuilt a data drive. Subsequently, I'm having issues with Emby. The three users I set up are gone, and I can't log in with their creds. Instead, I see a new user "bin" that I never created. I found this discussion on the Emby forum, which pointed me at users.db, and I've also posted there for help. Per the Emby discussion, I opened up users.db, and I can see my three users in there, and there's no "bin" user. What could've happened, and how do I fix this? In /data: [screenshot] In users.db: [screenshot]
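     One thing worth ruling out after a rebuild: "bin" is a system account name, and it tends to surface when file UIDs no longer map to the owners a container expects. A hedged check/fix sketch; 99:100 (nobody:users) is the conventional Unraid container user, and the path is the usual appdata default:

        ls -ln /mnt/user/appdata/emby            # -n shows raw numeric UIDs/GIDs
        # If ownership looks wrong, stop the container, then reset it:
        chown -R 99:100 /mnt/user/appdata/emby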
  15. I'm a bit scared to update SWAG these days, and it doesn't look like there's a way to check which version I have installed if I need to roll back (besides going by time and testing for the right old version). My current install is from two months ago. Are there critical changes? Is the certbot update critical? I'm going by what I read on the release page.
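     There is a way to read the installed version without guessing by date: LinuxServer images record it in an image label, so it can be pulled from the running container:

        docker inspect -f '{{ index .Config.Labels "build_version" }}' swag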
  16. Hi @apandey, thanks for clarifying! I'll replace it ASAP.
  17. Hi everyone! My parity disk is reporting errors, so I ran the SMART self-test. This is what I got:

        Num  Test_Description     Status                     Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Extended offline     Completed: read failure    90%        45560            21740232
        # 2  Short offline        Completed without error    00%        45559            -

     I can't replace the drive immediately. Is there anything else I can do?
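     Until the drive can be swapped, the usual interim step is to watch the raw SMART attributes rather than only re-running self-tests (replace /dev/sdX with the parity disk's actual device):

        smartctl -a /dev/sdX | grep -Ei 'reallocated|pending|uncorrect'
        smartctl -t long /dev/sdX   # re-run the extended test after checking cabling/power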
  18. Redid my entire SWAG setup from subfolders to subdomains. Audiobookshelf installed and came up without a hitch, working with the Android app. Amazing. 👍
  19. I did, and modified the conf like this, but it didn't work; it still asked for a password:

        # OPDS feed for eBook reader apps
        # Even if you use Authelia, the OPDS feed requires a password to be set for
        # the user directly in Calibre-Web, as eBook reader apps don't support
        # form-based logins, only HTTP Basic auth.
        location /opds/ {
            auth_basic off;
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app calibre-web;
            set $upstream_port 8083;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            proxy_set_header X-Scheme $scheme;
        }

     EDIT: Turned out it wasn't a reverse proxy issue for a change. It was on KOReader's end.
  20. Thanks for the link. I just realized the conf for calibre-web seems to have made an accommodation for OPDS:

        # OPDS feed for eBook reader apps
        # Even if you use Authelia, the OPDS feed requires a password to be set for
        # the user directly in Calibre-Web, as eBook reader apps don't support
        # form-based logins, only HTTP Basic auth.
        location /opds/ {
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app calibre-web;
            set $upstream_port 8083;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            proxy_set_header X-Scheme $scheme;
        }

     Does that suggest /opds doesn't need a password? The / location above it has:

        location / {
            # enable the next two lines for http auth
            auth_basic "Restricted";
            auth_basic_user_file /config/nginx/.htpasswd;
  21. If I have .htpasswd for password protection, is there a way to whitelist a specific URL so it can be accessed without a password? I want KOReader to browse my ebooks at https://calibre-web.mydomain.net/opds.