darknavi

Members
  • Posts: 17
  • Joined
  • Last visited


darknavi's Achievements

Noob (1/14)

Reputation: 5

  1. Me too! I'd love to jump into the fediverse this way.
  2. You are correct, fixed. Thanks for finding that!
  3. FYI for anyone who comes across this: I have an updated version of this app up in CA now. It is still called "ArchiveTeam-Warrior" but is published by me, darknavi, and it has a few extra params built in (see the docker-run sketch after this list for the kind of parameters involved). New support thread:
  4. Template repo: https://github.com/JakeShirley/unraid-templates/blob/main/archiveteam-warrior.xml
     Official repos: https://hub.docker.com/r/archiveteam/warrior-dockerfile/ and https://github.com/ArchiveTeam/warrior-dockerfile
     NOTE: This thread will be moved once my CA dev application is accepted.
  5. +1 to fixing this in the app template. It's trivial to add these variables to the config, so it would be nice if they were built in and the jobs started automatically.
  6. I updated the `VERSION` variable from "latest" to "1.23.3.4707-ebb5fe9f3" (I just looked through their tags, which include Plex server versions; this one is from ~2 weeks ago) and everything seems better. The error spew in the docker/Unraid logs went away. Not sure what's up with the latest PMS, but at least this works for now (see the version-pinning sketch after this list).
  7. Hey all! I am jumping on the bandwagon to report that my Plex server seems to start up, but then crashes over and over again. In the docker logs I see this repeating:
     libc++abi: terminating with uncaught exception of type soci::soci_error: Cannot begin transaction. database is locked
     ****** PLEX MEDIA SERVER CRASHED, CRASH REPORT WRITTEN: /config/Library/Application Support/Plex Media Server/Crash Reports/1.23.6.4810-15ce0e21a/PLEX MEDIA SERVER/8575f75a-9267-434f-35d880a7-f0e79b44.dmp
     Sqlite3: Sleeping for 200ms to retry
     Starting Plex Media Server.
     Sqlite3: Sleeping for 200ms to retry busy DB.  [line repeated many times]
     libc++abi: terminating with uncaught exception of type soci::soci_error: Cannot begin transaction. database is locked
     ****** PLEX MEDIA SERVER CRASHED, CRASH REPORT WRITTEN: /config/Library/Application Support/Plex Media Server/Crash Reports/1.23.6.4810-15ce0e21a/PLEX MEDIA SERVER/1bacac55-0edc-4204-144f91a3-3aecb0db.dmp
     Sqlite3: Sleeping for 200ms to retry
     Starting Plex Media Server.
     Any ideas?
  8. This worked for me to transfer over my torrents (along with adding the correct/same volume mapping(s)). Now to port over my RSS feeds!
  9. Sorry if you didn't follow the original thread. The 50G image is a result of me hacking a slowly growing image over the last few months. 20GB would probably be OK for me now (or something like 25GB/30GB). Here is my current breakdown of utilization:
     root@R710:~# docker system df
     TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
     Images          46      46       15.62GB   529MB (3%)
     Containers      47      42       3.33GB    12.99MB (0%)
     Local Volumes   12      3        2.103GB   2.103GB (100%)
     Build Cache     0       0        0B        0B
     As you can see, my "Images" section is pretty normal sized. I am fairly confident I don't have bad paths for growing log files. "Local Volumes" has been slowly growing over time. Here is what I get:
     root@R710:~# docker volume ls
     DRIVER    VOLUME NAME
     local     2b3fd8a101a636d24f77adca42bfba22ce9eef095f372d5e21e3a83de6abe644
     local     6fa5b3db957d0f1ad110142fa19277d740ab87d0651f50f6d976b654ee38c484
     local     7c790e7db66fb1c8b412acf5c1519824659f52ed246e8487a4b71a85f79f04a6
     local     35d47effc3710f615f67ba1692848b878210b7e24d377a908c653a95c43aa05f
     local     35946bfbd027c6a3742683142e038c00c1f79ea441ee7f03676406ed564006e6
     local     75256fbd65f49846202273725e3239c15ccc62c62c87de563473135bd3c05a22
     local     66307871c6655e337e3b97bc47fa25796d0d39ee612cac7915138a8a2d747cca
     local     a929d72ea8682b4d713d0f21030605d935f61ef94746943a342dd6db06069612
     local     ac116bbd185278344fd3354661a7609a8748aba75f97e7fc39ea7d5c5f361410
     local     b913e3c1ed43bca6462d4e1c47d3b54069812d21abea9b3531eac3ed633bcf81
     local     c95d3d84752422e4bf91914c1f06e33dc9c7083650418210b03374c6492d55e2
     local     d358180b3b12a4169736e754d56c92cec93652ca5b36b91485438a029e40e89f
     THIS is what I have no idea what to do with. One of these will appear every couple of days and slowly grow my docker usage. As for the System share, good call. I just replaced my cache drive and forgot to move them all over.
  10. Here you go! Sorry that I didn't do it earlier. It isn't a super bad problem but it'd be nice to get to the root cause. r710-diagnostics-20210405-0834.zip
  11. Glad to hear! A small update for me: since this post (~1 month ago) I have still had volumes appear locally. Still not sure what is causing it, but it's so slow that it isn't a huge problem.
  12. I just wanted to update this post and say that I found it. I ran `docker system df` and got this: root@R710:~# docker system df TYPE TOTAL ACTIVE SIZE RECLAIMABLE Images 45 45 14.02GB 854.9MB (6%) Containers 46 44 1.385GB 0B (0%) Local Volumes 92 3 32.84GB 32.84GB (99%) Build Cache 0 0 0B 0B I then ran `docker volume ls` and got this: root@R710:~# docker volume ls DRIVER VOLUME NAME local 0b4691e8e6b9aec2534926bdfc7590b80965f62cf186403f61f03196699f9ac0 local 0beda7811fd2cd3df67969779c977facc9418104806d4d32041526b3ebb43201 local 0e5ab035ec4d05405c27274dcd982e15ab5db00e7830c731a534a530c19605b9 local 0e19cb19b26605635b403820ab0173f5bdd6215dc7a69c8482fe69194aa92ea2 local 1bacaa29fd67ffba9ffbcb89ea9906e010b3d4057c30d0bd8ab7eb134d182565 local 1cb582c12674d5db6713a72d2b2230c091669fff794b0d9f820c89cd91c03dc2 local 2b3fd8a101a636d24f77adca42bfba22ce9eef095f372d5e21e3a83de6abe644 local 2cdcfecc153e1d97fccf1458e9553ab014402174e1f76178716615949e9ce914 local 3a1419b39fb86ca55660c566ea05a33c2f1a147bbc26163d5bf19a3e0abc0b5b local 4a8a615b58c80801f15e9112d49abdd7363f5dadf467521cd685fc41dceff93f local 5a08afc0cf63dd4650c99db9ef1da025c404785a77b8f0675f27ff8aa75cd00b local 5d0e9992a26994b0d434914e1efca937ee964b7b9d31fe97940ebeec9d5f518b local 5d1e9eafc8d14e923d2fb8b1eb0aa097a054d24b649d88296ee350c8f2929afb local 5fc2bd93316a687c75cfba1d118b02d74969fd08247c38d68f367b36a12eae65 local 6a15ba58e38186c2c561c2424fe121402a4e7c14a6f2d2d187ec240eba57f433 local 6d40f326bc96305a7fc40967c6a0032d5ac487fdba638ee6d5f1326999962fe1 local 6f28d5504fa1a938e5fb4fbe9def328e884e800a43103ca53c95fa2b1076b4a8 local 6fa5b3db957d0f1ad110142fa19277d740ab87d0651f50f6d976b654ee38c484 local 8adfff2345b72bbaabc5dd9792b93da2dd6c6d1d43cf24949fe32b3efd3e2c40 local 8de9e42cf259e69632aaf72836298e27bafcf6d2623eaf6bd8ae47b18f5a5a7a local 8f8246cad62e63f5f39728d257b3c7799f49f4ce8b5d2dd7049b1af3a384526b local 08f5ab58b9fbafc22ee1dfb6b24e688247f5cb745349cfd78ccad3c64220e66b local 9a5e26d14c46ef32b0674bbeddd42bbb55836c566947d001880c285f6deca7e4 local 9a48863ae1439b5368187a7642f104f7008af40beb023c34951b4e5082b48cb0 local 9bd392787048b6f7c378d0e88d5170da553f8c5f4f04151c59398d918539c4ad local 21c4376e00fe45bc0abcbee413834634eb99cea1ce99eea73db7e0092876d205 local 22d507151e7598f504e5dcf7b6ac5944f6afa52e0976b7ac1007c70d116e7a4b local 30c9f181a53605046caf5f6fd3c45080ebc399fb577959296f92e63dd5dfb147 local 41c3b7c782fb12874fc5c906d0a17979a5abcae11e4072c07450c579cffe3c92 local 071b4911911711a09d2954f0945b1ddb87b5a4bbbcedb2df864bbf722f0760c5 local 72ac0248d69e9c70b1cbdbcf9fc0410f8a516ebf5ba72879c3947a1be3735004 local 73e664c6e7c4d3f28a4215f7cc17e78400052694352495caa7a717de24c6abe3 local 86f740f73179ec4631f2ada9e54686202dde72a4d721fe77bfa39a39d625e204 local 88a1391610e8f5e2d6056b1ba412268e90d9578126b413119cf8084de00cf495 local 88d77f64f2e9d4e39b5bc874263cd51a33a17cc4bfbef7bf33c7b9e9a4e0bf8f local 93f2094e74824b423cc715622001155e2af3d97e012a27b883187eff220b9c2a local 98f5578cd44673ad6124d70a16cc1b295db9775dc4b4097ce62f2dbf6ed22622 local 99f396591e01ac59cb8834d07e68a5204425c11b4d3c0766bd91f98e3ea02b49 local 145cc854f87294ceda7b6a450c73751f1f39270e922f0dc95138c98f14bcb4c5 local 462a96c7fb653c78b5f84b517da2db9c9051f1a2a0e6dcf695da09f429412fe7 local 543f6ed94fcad8505e8dfd5f60657a1a474648fb985036c84f29c00712adcb25 local 827bf06dc8028761c294105b837268e022295973d13870c1c8d2dc22376f5f15 local 915cea180af089eb8a308fab9faba798b07277499bf1d5e6dd67dc1a2e3b1a0f local 958d2b52ae8307291103175d8b0c50029e1de03b336236500c45e9bf59ea1ac8 local 
1933bf82453d2f1014ac080134991fb8ea8c9147c5ac8c8c2861a853fcd0f989 local 3804b52fbfb86491b1938389e46b17001716fc38df656b4852b2368bee55fbf4 local 6527df71b3c0e1127c5226082de50fcb9cf97638cfbbebeeecabdcb1d6cc4e18 local 06564e392208b5c84344fc0efa69dfaa113e5585b7cdd98fda3081f751c3dd8f local 6602abd399b9c3fa1db2432fa277239be26b8029e992de2e95c70329afafa37f local 07510d5caed41a5be7d7f7025163f0e0b589b49b5952bfd254a250aca8325ac3 local 26187a197249a746376a13c4821d543b0dc08f5759dd19de928295dd9f725c23 local 225055f35f5f16c4dd74b1b56adef6115a81c412a833dadf1f6c920824cc12ea local 980566aca92b975170b6e9698de87e252db826bd83204d0eef3c04f84a1bbcf9 local 3860616e8a9ab0c3b0a892c41921c970611755b6bbaaaf53f32dcb4546806522 local a6ecef4b1d6a5c55e702f1ff61f697e287f187fb0b42671a0ccc0a6fe974ae82 local a0791b81aef83a9e7a5818fe094105284c4787546ddc09634e4d29410b69664d local a1978917022e81ac405a76e25b56f7b00d73a1703b213e14a27c39e8f2553688 local ab5ff135c0c1ebba693abe830161f45452edfbb461fd5d7071984c109baea207 local b913e3c1ed43bca6462d4e1c47d3b54069812d21abea9b3531eac3ed633bcf81 local b50761d66b582f0509c919cb5ce1c4aea7c0b2e10b48ba8f9bafd3a69e622926 local b6086892bb93a75bfa07181b61768201c06216d8286f00f3859679b77033faaf local bfd1021a2573067d740eb340bffe03c2c6b63fd3d4c765459bdb36b1b9bd3bda local c0f06c79211d1826f031fd3146f260408748d4ccddc1d37888bb394891fa7040 local c1aec3181a3a0a1dd949115600b675d7e45cf4192ebe55769ebd007b8d962676 local c6e685e43c56f1e51a545d063d313ee72ed60da469573e2faa470634784f8023 local c44c17f2b7ea6e8258e6670602670e47ef818025222e4567010f1999d1073887 local c979c20d44ddd9441dbc8ae92099305aa5b1b29f012d2be1e01ea9d8e9113bd4 local cbf78365413b98f29146b4566d4a3da00d5536363fde348e98da27cea3e10bdf local ce1edb02b21744550d9dba9af8d520854dd7a98265a61149f1a6e9ff5d63ad39 local cfce76e46ccf75899e3b9cf25bb0a23ef21512f7807defa7544fb7395ff60671 local d1e948fd3568659e5e4c6a943640b2980ce42ad61a1a7701dfe0fbc56adf7118 local d7a6ce72768ec6a4a2e4ba507398e60631fb4d41b940ce7cb1caff74004dfb49 local d7a29b90bac94d37ef9997331da5223ebfc67fd97285abb8c6088a62e374e4ed local d9d3e3048d51f345b9326bba47d26e9683a32f4b52569220fae6738e0717b91b local d35e32004f844fe60d7a5e64a0b42ab052eb677a62793a3f72dece9240fa0c1e local d967346abc00bb085dde15632c0969d0b8145996fb4f10bc29f0503f633bee17 local db622f2fba899565c2a5625043e2a5e9a1246b34dc12daa67390b4b6abec2c33 local ddd92106529d1a04def3de441a2f2d7c98c46973abb39b65cd73ee23b35f2f70 local dfd90f6aafce9bb3b1be879ca79130005614e298650fa1ed5f3400d614f083e0 local e3e8ccff796a1f4ab4861663782d00b4d656dd2ace0f37830ea56dcd2c0f8f1e local e6c2d0f39d2960428883f6f4bc2949bce114ac2d46de84a83ef8051e2ed0256d local e880c9b1687f4489005a0b5ed9c5c82931ccc2a27c30d83e7556e5aacfc71edb local ea0b8d625dc02874f33a68960a1763b335379c60c30dbc6e2440182a48b846c7 local ea3c41f21a082667787b4014ea7b7906485f1e663f8801eeda87de637a18111a local ea71ee86227850e1c9ee0cd4f42b171866c4bbe27872a3bc7cecbf0c711f3e6f local ea7257bb2669b969eba2466a8fd2fd3cd39d8d5553ceb6b868c1322869723d1b local eb9135881d2df6e09b447a8c5d9f24d0e6f2bd8e4be1975d5a89fd6abc663283 local ebcf07a0f174d44698a85cef3ff037200b190a15971b1ac53a0c25184721389f local ef40e97ba7e6d92f02939fcf9221dd15be4c30e995a39a727d940bf4c7f3abe0 local fae533140f6a7d28597b8428ef21429b4a2cc36ac42b02f7471d333b2db5a73d local fe6c8a0555697e17522f59960451b9bd8ea548d759b6f31256b13c0a7e14dd96 local fea475a9c34758d52ccc4836e14f0d325eb56d8b037fb94adc8c2826ce0cd2d4 So then I ran `docker volume prune` and it freed 32GB of docker disk space. Anyone have any ideas why I had so many dangling docker volumes?
  13. Hey all, I am having a hard time diagnosing why I am getting these utilization errors. I assume one of my docker containers is writing to an in-image location, but I can't figure out which. Here is what I notice:
      - I have been getting Docker utilization warnings slowly creeping up over the last week.
      - "Docker" under "Memory" says 100%: https://i.imgur.com/eOaVJgC.png
      - `htop` says I am only using ~10/64GB of RAM: https://i.imgur.com/jyffDgN.png
      - The Docker settings page seems to think I am using 50/50GB: https://i.imgur.com/OHNjzIC.png
      - The "Container Size" button says I am only using ~18 GB of docker image space:
      Name                          Container   Writable   Log
      ---------------------------------------------------------------------
      binhex-krusader               1.92 GB     35.6 MB    4.04 MB
      lidarr                        1.53 GB     866 MB     7.64 MB
      ESPHome                       1.05 GB     3.86 MB    4.20 MB
      plex                          679 MB      313 kB     1.36 MB
      sonarr                        661 MB      33.7 MB    11.8 MB
      valheim-server                603 MB      2.81 MB    383 kB
      OpenEats                      597 MB      3.13 MB    1.52 MB
      overseerr                     545 MB      2.28 MB    32.2 kB
      code-server                   543 MB      305 kB     11.3 kB
      NodeRed                       456 MB      0 B        251 kB
      Flaresolverr                  444 MB      9.40 MB    669 kB
      warrior-dockerfile            414 MB      83.1 MB    531 kB
      gaps                          402 MB      44.1 MB    2.21 MB
      dupeGuru                      394 MB      23.6 kB    1.15 MB
      qbittorrent                   390 MB      304 kB     42.8 kB
      netdata                       388 MB      61.4 MB    1.27 MB
      ffmpeg                        380 MB      0 B        26.3 MB
      jackett                       369 MB      109 MB     261 kB
      mariadb-OpenEats              350 MB      345 kB     13.7 kB
      pihole-template               349 MB      15.9 MB    217 kB
      tesla_dashcam_saved_clips     335 MB      16.1 kB    2.64 MB
      tesla_dashcam_sentry_clips    335 MB      16.1 kB    16.5 MB
      readarr                       322 MB      330 kB     141 kB
      teslamate_postgres            314 MB      63 B       52.4 kB
      radarr                        308 MB      303 kB     14.6 kB
      swag                          306 MB      10.6 MB    49.2 kB
      Factorio                      288 MB      18.7 kB    92.9 kB
      openvpn-as                    283 MB      22.7 MB    37.7 kB
      QDirStat                      251 MB      23.7 kB    4.41 MB
      qbittorrentvpn                241 MB      1.38 kB    967 kB
      rutorrent                     218 MB      3.78 MB    76.6 kB
      tautulli                      207 MB      327 kB     3.51 MB
      teslamate_grafana             194 MB      0 B        125 kB
      youtube_archive               185 MB      16.7 MB    28.9 MB
      teslamate                     167 MB      3.88 MB    135 kB
      mylar3                        164 MB      11.1 MB    138 kB
      homelablabelmaker             127 MB      2 B        311 kB
      thelounge                     121 MB      8.44 kB    37.9 kB
      scrutiny                      107 MB      18.8 MB    208 kB
      cloudflare-ddns               89.2 MB     6.37 kB    45.6 MB
      statping                      58.7 MB     0 B        13.4 MB
      cyberchef                     57.7 MB     1.12 kB    32.3 kB
      OpenSpeedTest-Server          54.1 MB     2 B        227 kB
      teslamate_eclipse-mosquitto   9.52 MB     0 B        9.38 kB
      unpackerr                     9.41 MB     0 B        44.6 MB
      ---------------------------------------------------------------------
      Total size                    17.2 GB     1.36 GB    226 MB
      So I'm not sure if I am running out of RAM, docker image space, or both. I am on Unraid 6.9, but this started happening before the upgrade. (A few commands for narrowing this down are sketched after this list.)
  14. Ok, so I just tested it. Stopping and starting the server:
      - Corrupts the level.db file (the server is no longer joinable, and the level file is only a few MB)
      - Creates a level.db.backup file that DOES work if manually renamed
      Any ideas? Basically every time I restart the server I have to restore a backup.
  15. Last night my Valheim world got "corrupted" again after running CA Auto Update. Symptoms:
      - The world file is small (under 1MB)
      - Cannot join the server (the server is still broadcast to Steam, but joining in-game results in a black screen)
      - The level.db file in the backup is also corrupt
      - The level.db.backup file in the backup actually works (not sure what is creating this file, or where)
      Any help would be appreciated. I am afraid to run this container much longer because it seems increasingly likely that I'll lose my progress. Is this a problem with the game server software itself? Or has anyone found a way to fix this corruption/reset issue? (A sketch of the manual level.db.backup restore I keep doing is after this list.)
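
For post 3 above: a minimal sketch of how the warrior container is typically run. The environment variable names (DOWNLOADER, SELECTED_PROJECT, CONCURRENT_ITEMS) are assumptions based on the upstream warrior-dockerfile documentation, not taken from the darknavi template, and may differ from the params it actually exposes.

```
# Sketch only; env var names are assumptions from the upstream
# warrior-dockerfile docs, not confirmed by the template above.
docker run -d --name archiveteam-warrior \
  -p 8001:8001 \
  -e DOWNLOADER="your-nickname" \
  -e SELECTED_PROJECT="auto" \
  -e CONCURRENT_ITEMS="2" \
  archiveteam/warrior-dockerfile
# The warrior web UI should then be reachable at http://<host>:8001.
```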
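
For post 6 above: a minimal sketch of pinning the Plex build via the `VERSION` variable, assuming a linuxserver/plex-style container where that variable selects the PMS build to install. The image name and volume paths below are placeholders, not taken from the post; only the build string comes from post 6.

```
# Sketch only; image name and host paths are assumptions.
docker run -d --name plex \
  --net=host \
  -e VERSION="1.23.3.4707-ebb5fe9f3" \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  linuxserver/plex
```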
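
For posts 12 and 13 above: a few standard Docker CLI commands that help track down where docker.img space is going and whether any container still references a dangling volume. Nothing here is specific to this server; the volume name is a placeholder.

```
# Per-image, per-container and per-volume usage breakdown
docker system df -v

# List only dangling (unreferenced) anonymous volumes
docker volume ls -f dangling=true

# Check whether any container (running or stopped) still uses a given volume
docker ps -a --filter volume=<volume-name>

# Remove all dangling volumes (this is what freed ~32GB in post 12)
docker volume prune
```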
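
For posts 14 and 15 above: a minimal sketch of the manual recovery described there (promote level.db.backup over the corrupt level.db before starting the server). The container name and host worlds path are assumptions and will differ per install.

```
# Sketch only; container name and worlds path are assumptions.
WORLDS="/mnt/user/appdata/valheim/worlds"

docker stop valheim-server
cp -a "$WORLDS/level.db" "$WORLDS/level.db.corrupt"     # keep the bad file for reference
cp -a "$WORLDS/level.db.backup" "$WORLDS/level.db"      # promote the backup that still loads
docker start valheim-server
```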