Posts posted by jaylo123

  1. On 2/22/2024 at 8:10 PM, xceph said:

    Today I updated to the latest version offered, and suddenly started getting SQLite errors. Anyone else experiencing these?

     

    [v5.3.6.8612] code = Corrupt (11), message = System.Data.SQLite.SQLiteException (0x800007EF): database disk image is malformed
    database disk image is malformed
       at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
       at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
       at System.Data.SQLite.SQLiteDataReader.Read()
       at Dapper.SqlMapper.QueryImpl[T](IDbConnection cnn, CommandDefinition command, Type effectiveType)+MoveNext() in /_/Dapper/SqlMapper.cs:line 1178
       at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
       at NzbDrone.Core.MovieStats.MovieStatisticsRepository.Query(SqlBuilder builder, String template) in ./Radarr.Core/MovieStats/MovieStatisticsRepository.cs:line 60
       at NzbDrone.Core.MovieStats.MovieStatisticsRepository.MovieStatistics() in ./Radarr.Core/MovieStats/MovieStatisticsRepository.cs:line 30
       at NzbDrone.Core.MovieStats.MovieStatisticsService.MovieStatistics() in ./Radarr.Core/MovieStats/MovieStatisticsService.cs:line 23
       at Radarr.Api.V3.Movies.MovieController.AllMovie(Nullable`1 tmdbId, Boolean excludeLocalCovers) in ./Radarr.Api.V3/Movies/MovieController.cs:line 132
       at lambda_method18(Closure , Object , Object[] )
       at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync()
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync()
    --- End of stack trace from previous location ---
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
    --- End of stack trace from previous location ---
    
    2024-02-22 18:08:16,244 DEBG 'radarr' stdout output:
       at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
       at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
       at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
       at Radarr.Http.Middleware.BufferingMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/BufferingMiddleware.cs:line 28
       at Radarr.Http.Middleware.IfModifiedMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/IfModifiedMiddleware.cs:line 41
       at Radarr.Http.Middleware.CacheHeaderMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/CacheHeaderMiddleware.cs:line 33
       at Radarr.Http.Middleware.StartingUpMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/StartingUpMiddleware.cs:line 38
       at Radarr.Http.Middleware.UrlBaseMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/UrlBaseMiddleware.cs:line 27
       at Radarr.Http.Middleware.VersionMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/VersionMiddleware.cs:line 29
       at Microsoft.AspNetCore.ResponseCompression.ResponseCompressionMiddleware.InvokeCore(HttpContext context)
       at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
       at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
       at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
       at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.<Invoke>g__Awaited|6_0(ExceptionHandlerMiddleware middleware, HttpContext context, Task task)

     

    Had the same issue. Figured maybe it was a widespread thing.

     

    Nope.

     

    I checked the Radarr logs going back over a month (the maximum retained) and every single entry has some message about a malformed DB. No idea when it started, but we're in the same boat: blow it away and start from scratch, since presumably the backups are corrupt too.
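
    If you want to confirm whether a backup copy is actually corrupt before tossing it, the sqlite3 CLI can tell you. A minimal sketch - the path is just an example, so point it at wherever your Radarr backup lives:

    # Check a Radarr database copy for corruption (path is an example)
    sqlite3 /path/to/backup/radarr.db "PRAGMA integrity_check;"
    # Prints "ok" if the database is intact; otherwise it lists the damaged pages.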

     

    I only wish I could get notified about this - might need to set up a cronjob to trawl the logs and shoot me an email alert if the word 'malformed' ever appears so I can jump right on it. I did a cursory check of all my other apps and everything else seems fine. Just this one, on my local storage. /shrug
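
    Something like this as a cron entry would do it. A rough sketch only: it assumes a working mail command on the box, and the log path is a guess at a typical appdata layout, so adjust both:

    # Hypothetical hourly check: email me if 'malformed' shows up in the Radarr log
    0 * * * * grep -q "malformed" /mnt/user/appdata/radarr/logs/radarr.txt && echo "Radarr DB corruption detected" | mail -s "Radarr ALERT" me@example.com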

  2. I've read the above. I still don't understand why this is even required. Responses above have said that this isn't a bug in .8, yet this "patch" exists as a thing in our collective universe. Either this "patch" should be included in UnRAID's base code and addressed in a .9 release, or I should uninstall "Fix Common Problems" / tell it to ignore this issue.

     

    I completely understand and appreciate that none of the above is part of the UnRAID native operating environment, but I also shouldn't be getting nightly reminders that I require a "patch" for something that doesn't impact me at all. And if I install it to quiet my alerts, I then have to remember to uninstall it when it's no longer an issue.

  3. On 2/11/2024 at 7:25 PM, Hoopster said:

    Enabling Help on the Cache Dirs plugin provides the answer. Disable logging if you do not want the .csv file to keep endlessly growing. As noted, it is not automatically rolled. Or you can delete it and let it start over, I suppose.

     


    Didn't even think to check there. And I clearly missed that the log file isn't automatically rolled. I'll add it to logrotate/tmpwatch/whatever Unraid uses (I float between different distros at work and can't remember what Unraid uses lol).
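
    For anyone else doing the same, a logrotate stanza along these lines would handle it. A sketch only - the .csv path is a guess at where the Cache Dirs plugin writes its log, and note that /etc on Unraid lives in RAM, so this would need re-applying after a reboot:

    # Drop a logrotate rule for the Cache Dirs csv (verify the path on your system)
    cat > /etc/logrotate.d/cache_dirs <<'EOF'
    /var/log/cache_dirs.csv {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }
    EOF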

     

    Thanks again!

  4. On 7/30/2023 at 11:47 AM, dysfunktionalsd said:

    Thank you! 

    Ended up going the VMDK route and got it to go through instead. I appreciate the help! 

    I know this is an older post now, but don't forget to go back and switch the vDisk bus from SATA to VirtIO! This significantly increases read/write speeds on disks in VMs managed by KVM.
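
    For reference, the change lives in the VM's XML (virsh edit <vmname>, or the Unraid VM editor in XML view). Roughly like the below - and note that if the guest is Windows, the VirtIO drivers need to be installed before switching or it won't boot:

    <!-- Before: emulated SATA -->
    <target dev='sda' bus='sata'/>
    <!-- After: paravirtualized VirtIO (the dev naming convention changes too) -->
    <target dev='vda' bus='virtio'/>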

  5. Found it. 

     

    /boot/config/plugins/dockerMan/templates-user/<container name>.xml

     

    Update it, restart docker via Settings -> Docker -> Enable Docker: No

    Save

    Set Enable Docker: Yes

     

    I guess this is the correct method:

    Apps -> Previous Apps -> Reinstall -> Remove offending entry -> Success
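
    If you end up hand-editing the template like I did above, back it up first. A sketch - 'mycontainer' is a placeholder for your container's template name:

    # Back up the template before touching it (replace 'mycontainer' with the real name)
    cp /boot/config/plugins/dockerMan/templates-user/mycontainer.xml /boot/config/mycontainer.xml.bak
    # Then edit out the bad entry
    nano /boot/config/plugins/dockerMan/templates-user/mycontainer.xml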

  6. Hello -

     

    I updated an application in docker with an invalid configuration setting. I meant to add a port to a container's configuration but forgot to change the setting from 'volume' to 'port' in the web gui. Now the container is gone, with an orphan image. Reinstalling from 'previous apps' produces the same issue.

     

    I'd prefer to not wipe/reload the entire container and start over from scratch because the configuration is quite tweaked overall. 

     

    I've tried to locate the file in the OS but I'm not having much luck. I was hoping to just strip the last change out of the configuration file / template, but I can't seem to find where that data is stored. I've checked the docker.img mount point under /var, appdata and /boot.

     

    Where would one go to quickly edit the config so I can fix my 'uh-oh'? I'm fine once I know where the app configurations are stored and can figure it out from there, I just need some guidance on where to look. Thanks in advance!

  7. 40 minutes ago, JonathanM said:

    If this rig is running unattended I highly recommend upgrading to a normal CPU cooler that doesn't rely on pumping liquid to keep the CPU from overheating. The worst that happens when a normal heatsink and fan combo quits is slightly elevated temperatures, slowly ramping up as the convection air keeps passing over the heavy block of copper and aluminum fins, and the system fans that are still running keep the box from melting down.

     

    Worst case scenario with liquid cooling is ruined circuit boards below where the liquid leaked accompanied by a rapid overheating.

     

    Desktop gaming rigs that are always within reach are great for liquid cooling, servers not so much.

    Excellent point. The server is actually within reach, but of course it is also always on and I'm not always at home.

     

    I did slyly purchase this setup with the intent of turning it into a gaming rig, grabbing a Rosewill chassis or whatever, and moving my server duties to that. I built the system in 2020 and the last component I was missing was a GPU. Well, Ethereum put a stop to those plans.

     

    I did have an M60 available from work that I used up until 6 months ago for vGPU duties but ... eww. Maxwell. It got the job done for my vGPU passthrough needs, though, which mostly consisted of either Plex transcoding (I've since flipped to Intel QuickSync) or a VM running a desktop so my daughter could play Roblox / Minecraft / whatever remotely using Parsec. And Maxwell was the last generation of GRID GPU that didn't require a license server. That all started with Pascal and continues on with Turing, Ampere and now (I assume) Lovelace.

     

    Now, however, EVGA GeForce GPUs are about to go on a fire sale, so maybe I can continue with those plans - albeit 2.5 years later. I already have my half-height rack sitting in my Amazon cart, but I'm still struggling to find a case that meets my needs.

  8. 6 hours ago, trurl said:

    Were you writing to a cached share? If writing directly to the array, parity still has to be updated, so single-disk speed isn't possible. Turbo write can help some.

    Nope. I did have the parity drive operational and the array online, other than the single disk that I removed from the array. That's why /mnt/user/ was my <dest> path, and I was getting those speeds. And the amount of data being transferred was 6x the size of my SSD cache disk. My signature has the drives I use, which are enterprise Seagate Exos disks. I guess their on-disk cache handles writes a bit more efficiently than commodity drives? /shrug - but 240MB/s is the maximum for spinning disks without cache, and I assume the writes were sequential.

     

    6 hours ago, Squid said:

    Write speeds by default are always going to be significantly slower than the raw speeds.

     

    This is because the default write mode goes like this for any given sector on the device: read the existing sector, read the parity sector, calculate the new parity, then write both the data and the parity.

     

    You can use "Turbo Write Mode" (ie: Reconstruct write) which will pretty much write at the full speed of the drive (subject to bandwidth considerations), but at the expense that all drives will have to be spinning.

    Oh that's interesting (Turbo Write Mode). That ... probably would have been beneficial for me! But I got it done in the end after ~9 hours. Of course, as a creature of habit, I re-ran the rsync one more time before wiping the source disk to ensure everything was indeed transferred. 
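
    (For later readers: reconstruct write lives under Settings -> Disk Settings as the md_write_method tunable. I believe it can also be flipped from the console - the values below are from memory, so double-check before relying on them:)

    # Hedged sketch: toggle "turbo" (reconstruct) write from the CLI
    mdcmd set md_write_method 1   # reconstruct write - all drives must spin
    mdcmd set md_write_method 0   # back to the default read/modify/write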

     

    I didn't measure IOPS but I'm sure they would have been pretty high. I just finished benchmarking a 1.4PB all-flash array from Vast Storage at work and became pretty enamored with the tool elbencho (a pseudo mix of fio, dd and iozone, with seq and rand options - and graphs!) and after spending basically 4 weeks in spreadsheet hell I wasn't too interested in IOPS - I just needed the data off the drive as quickly as possible :).  That said, making 16 simultaneous TCP connections to an SMB share and seeing a fully saturated 100GbE line reading/writing to storage at 10.5GB/s felt pretty awesome!

     

    For anyone interested in disk benchmarking tools I highly recommend elbencho as another tool in your toolkit. The maintainer even compiles a native Windows binary with each release. Take a look! 

     

    breuner/elbencho: A distributed storage benchmark for file systems, object stores & block devices with support for GPUs - https://github.com/breuner/elbencho
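
    As a starting point, an invocation like this runs a multi-threaded sequential write pass then a read pass. Flags are from memory, so check elbencho --help before trusting them:

    # 16 threads, 4 MiB blocks, 20 GiB file: write pass then read pass
    elbencho -w -r -t 16 -b 4m -s 20g /mnt/user/bench/testfile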

     

    41 minutes ago, Marshalleq said:

    @jaylo123 It's a known fact that Unraid's 'Unraid' array has dawdling speed. There is no workaround for this. The only solution I can think of (which I have done) is to not use the unraid array. So pretty much on unraid that means use a ZFS array. From experience the speed increase was notable - add to that the remainder of benefits and (to me at least) it's a no brainer.

     

    However, despite being very well implemented into unraid, you would need to be comfortable with the command line to use it and be prepared to do some reading on how it works.  So it isn't for everyone and I'm not trying to push you one way or the other.  I'm just saying the 'unraid' array is known to be extremely slow.

    Oh certainly, and yes, I knew there was a performance hit with the unraidfs (for lack of a better term) configuration/setup. And agreed, too: eschewing the UnRAID array entirely and hacking at the CLI to set up ZFS is a route one could take. But at that point, it would be better to just spin up a Linux host and do it yourself without UnRAID, or switch over to TrueNAS. The biggest draw for me to UnRAID was/is the ability to easily mix/match drive sizes and types into one cohesive environment.

     

    I guess I was really just documenting how I was able to achieve what OP was trying to do for future people stumbling across the thread via Google (because that's how I found this post).

  9. On 8/14/2022 at 10:26 AM, SmartPhoneLover said:

    First of all, sorry to all of you for not updating the template. I was having an issue with my email client a few days ago, and I was not checking the forums for changes. Now it's updated with the latest changes.

     

    @bobby @RyanServer711 @spamenigma @kri kri

    The Docker Socket path is now added to the unRAID template. The template will be updated within a couple of hours from now.

     

    @jaylo123

    It's a very good suggestion, and I include myself in this. I know I have many docker templates that haven't been updated since I published them, but I try to find some time to work on them, even if very slowly. Sorry about that.

    No need to apologize! DEFINITELY appreciate the work you've done! I was probably a bit too harsh myself. I've just seen soooo many apps on CA sit there in various states of functionality (even 'Official' ones), so I kind of soapboxed a bit. This thread was probably the wrong place for it.

    Your work here is great and I actually use your contribution for Homarr in CA every day! Cheers

  10. On 12/5/2018 at 7:57 AM, jonp said:

    Yeah, Johnny Black is literally giving you the exact reason why write speeds when transferring data from disk to disk INSIDE the array are slow. It's because for each write (either deleting from the one disk or writing to the other), it has to update the parity disk for BOTH transactions. That's why writing to disks in the array is fast when the source ISN'T on the array.

    Yep. I came across this thread (now years later, via Google) because the speeds were horrible. I hadn't even considered taking the disk I wanted to remove out of the array. I stopped the array, went to Tools -> New Config and set everything up the same way, except I didn't add the disk I wanted to repurpose outside of UnRAID.

     

    When the disk was in the array I was getting ~50-60MB/s using rsync on the command line while transferring large (5+GB) files. After I removed the disk from the array, restarted the array without the disk, then SSHed into the UnRAID system, manually mounted the disk I wanted to remove and re-ran the same rsync command, I was getting ~240MB/s - the maximum my spinning disks can do for R/W ops. I would expect a destination array built on SSDs to also reach its theoretical maximum throughput, depending on your data of course (block size, small files vs large files, etc).

     

    It meant the difference between a 32 hour transfer and just over a 9 hour transfer for 7TB of data.
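
    (The arithmetic roughly checks out: 7TB is about 7,000,000 MB, so 7,000,000 MB / 60 MB/s ≈ 32 hours, while 7,000,000 MB / 240 MB/s ≈ 8 hours - the extra hour in practice presumably went to smaller files and overhead.)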

     

    Steps I used, in case someone else finds this thread via a Google search like I did. Full warning: the below is only for people who understand that UnRAID support on these forums will only help you on a 'best effort' basis, and who are comfortable with the command line. There is no GUI way of doing this. You've been warned (though, that said, this is fairly easy and safe - but since we are "coloring outside of the lines", BE CAREFUL).
     

    After removing the drive from the array via Tools -> New Config and starting the array without the drive, manually updating all shares to a configuration where the mover will not run, and assuming /dev/sdf1 is the drive you want to remove, install 'screen' via the Nerd Pack plugin, launch a console session (SSH or web console via the GUI - either works) and type:

     

    # Launch screen
    root@tower# screen
    # Create a mount point for the drive you want to remove data from
    root@tower# mkdir /source
    # Mount the drive you want to remove data from
    root@tower# mount /dev/sdf1 /source
    # Replicate the data from the drive you intend to remove TO the general UnRAID array
    root@tower# rsync -av --progress /source/ /mnt/user/
    # --progress is optional; it just shows speed and what is happening
    # Press 'CTRL+A, then the letter D' to DETACH from screen if this is a multi-hour process or you need to start it remotely and check on it later.
    # Press 'CTRL+A, then the letter K, then Y' to kill the entire screen session. Note that this WILL stop the transfer, whereas 'CTRL+A, then D' will not.
    #
    # To reconnect, SSH back into the system and type:
    root@tower# screen -r

    (wait for rsync to complete)

    root@tower# umount /source
    root@tower# rmdir /source
    # IMPORTANT: If either of the above commands fail, you have ACTIVE processes that are using the drive you want to remove.
    # Unless you know what you're doing, do not proceed until the above two commands work without any warnings or errors.

    Shut down server, remove drive, turn server back on, and change the shares that were modified at the start of this process to their original state so mover will run once again.

     

    Why use screen? You can certainly do this without screen, however if you don't use screen and you get disconnected from your server during the transfer (WiFi goes out, you're in a coffee shop, etc), your transfer will stop. Obviously this is not an issue if you're doing this on a system under your desk. But even then, it is probably still a good idea. What if the X session crashes while you're booted into the GUI? Screen does not care - it will keep going, and you can reattach to it later to check on the progress.

     

    I did try to use the Unbalance plugin in conjunction with the Unassigned Drives plugin so that the drive I wanted to copy data FROM was not in the array, however Unbalance doesn't work that way - at least not that I could dig up.

  11. On 8/8/2022 at 1:07 PM, kri kri said:

    That seems a bit harsh. CA apps are published by volunteers. 

     

    @SmartPhoneLover can you add the above to the CA apps config? 

    Well, I can see both sides. I've certainly also abandoned perfectly working apps because of similar issues. While they are maintained by volunteers, it would seem that in at least some cases (especially with lesser-known apps) the volunteer in question just abandons it and it languishes.

     

    Maybe it's a larger discussion for CA where the program requires a twice-a-year check-in from the volunteer or the app gets tagged with something like 'possibly no longer maintained' so people browsing the store know that they may have issues.

     

    Folks like binhex are famously reliable, but "johnsmith1234" with one app may publish it once and then never re-evaluate it for required updates ever again, even though the docker version it was created against is now 8 major releases ahead - yet CA offers it as if it works just fine out of the box.

     

    If the volunteer doesn't check in within a 'renewal month' window or something - maybe twice a year? - it gets an 'unmaintained' flag so the community knows they may have issues and/or need deeper Linux and Docker knowledge if they wish to use the app.

     

    If the volunteer wishes to remove the flag, they just go to their app and maybe check a box that says, "This app has been validated to still be functional" or something.

     

    netbox by linuxserver is a perfect example of this. It gets you around 70% of the way there, but unless you're fine dabbling with command-line edits of an application you've never messed with, and you know where to look for the right log files to debug / troubleshoot, you're just going to give up after a few minutes. Just thinking out loud, certainly not knocking anyone in particular, but I do think some additional QC of CA wouldn't be too much to ask.

  12. Hi folks. I know it's an old thread, but I'm sharing in case anyone else ends up here from a Google search.

     

    This seems to have done the trick for me (note that I'm not running Hyper-V or Windows in my VM so I cannot confirm on Hyper-V):

     

    https://stafwag.github.io/blog/blog/2018/06/04/nested-virtualization-in-kvm/

     

    Specifically, editing the VM XML and changing the cpu mode section with this:

    <cpu mode='host-model' check='partial'>
       <model fallback='allow'/>
    </cpu>

     

    Of course, you also need to ensure the kvm-intel.nested=1 change is applied to your grub config - the first step OP mentioned. The link I shared shows how you can do this without rebooting as well. You can also just add the change to /boot/config/modprobe.d/<filename> as mentioned in the linked article (on Unraid, that is where modprobe.d lives).
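
    For reference, this is roughly what that looks like (the filename is arbitrary, and reloading the module only works while no VMs are running):

    # Persist the setting across reboots (filename is arbitrary)
    echo "options kvm_intel nested=1" > /boot/config/modprobe.d/kvm-intel.conf
    # Apply without rebooting: unload and reload the module (stop all VMs first)
    modprobe -r kvm_intel
    modprobe kvm_intel
    # Verify - should print Y (or 1 on older kernels)
    cat /sys/module/kvm_intel/parameters/nested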

     

    My VM (proxmoxtest in this case) detects vmx as a CPU feature now, and the XML was updated automatically with all of the features:

     

    root@mediasrv:~# virsh dumpxml proxmoxtest|grep feature
      <features>
      </features>
        <feature policy='require' name='ss'/>
        <feature policy='require' name='vmx'/>
        <feature policy='require' name='pdcm'/>
        <feature policy='require' name='hypervisor'/>
        <feature policy='require' name='tsc_adjust'/>
        <feature policy='require' name='clflushopt'/>
        <feature policy='require' name='umip'/>
        <feature policy='require' name='md-clear'/>
        <feature policy='require' name='stibp'/>
        <feature policy='require' name='arch-capabilities'/>
        <feature policy='require' name='ssbd'/>
        <feature policy='require' name='xsaves'/>
        <feature policy='require' name='pdpe1gb'/>
        <feature policy='require' name='ibpb'/>
        <feature policy='require' name='ibrs'/>
        <feature policy='require' name='amd-stibp'/>
        <feature policy='require' name='amd-ssbd'/>
        <feature policy='require' name='rdctl-no'/>
        <feature policy='require' name='ibrs-all'/>
        <feature policy='require' name='skip-l1dfl-vmentry'/>
        <feature policy='require' name='mds-no'/>
        <feature policy='require' name='pschange-mc-no'/>
        <feature policy='require' name='tsx-ctrl'/>
        <feature policy='disable' name='hle'/>
        <feature policy='disable' name='rtm'/>
        <feature policy='disable' name='mpx'/>
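
    And inside the guest itself, a quick check that the flag actually made it through - a nonzero count means vmx is exposed:

    # Run inside the VM
    grep -c vmx /proc/cpuinfo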

     

  13. 15 hours ago, binhex said:

    It's not a duplicate, guys, and it has been like this for a long time - it's UDP and TCP for the port:

    https://github.com/binhex/docker-templates/blob/42c31fcabb842e1b51e26935a6a9198a857f2f3e/binhex/delugevpn.xml#L41-L50

    Just to be clear, that port can only be used if the VPN is turned off, so changing it in any way has no effect when running with the VPN turned on.

     

    Oh, interesting. That would explain how the same port # was listed twice and yet the container still started - one for TCP and one for UDP. I normally just set up my container (in this case, your delugevpn was set up in 2018), ensure it works and only check support threads like this one if I notice an issue. I would assume many others do the same.

     

    I do have daily updates of containers enabled. I guess a container update sometime early the week of July 3rd created that extra entry. My VPN is never turned off.

     

    I just know that whatever was recently added created a duplicate "Host Port" entry and it prevented Deluge from functioning normally (VPN and Privoxy and all other components worked fine). Removing it resolved the issue. And it seems others are having some kind of similar issue connecting to trackers.

     

    Also, just wanted to say thanks for how you keep your containers updated and your dedication to the community here!

  14. On 7/6/2022 at 1:05 PM, Antimarkovnikov said:

    I'm having an issue with Deluge VPN. Right now none of my torrents are updating their trackers, and they're all sitting at no seeders or peers. I use PIA and have tried connecting to different ovpn servers without any luck. Here's the current log for the docker.

     

    2022-07-05 13:21:26,373 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'outgoing_interface' currently has a value of 'tun0'
    [info] Deluge key 'outgoing_interface' will have a new value 'tun0'
    [info] Writing changes to Deluge config file '/config/core.conf'...
    
    2022-07-05 13:21:26,854 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'default_daemon' currently has a value of '51cb5aaae02ac52ca9d32c0434a98e12d4fac639'
    [info] Deluge key 'default_daemon' will have a new value '51cb5aaae02ac52ca9d32c0434a98e12d4fac639'
    [info] Writing changes to Deluge config file '/config/web.conf'...
    
    2022-07-05 13:21:27,589 DEBG 'watchdog-script' stdout output:
    [info] Deluge process started
    [info] Waiting for Deluge process to start listening on port 58846...
    
    2022-07-05 13:21:27,923 DEBG 'watchdog-script' stdout output:
    [info] Deluge process listening on port 58846
    
    2022-07-05 13:21:32,721 DEBG 'watchdog-script' stdout output:
    Setting "random_port" to: False
    Configuration value successfully updated.
    
    2022-07-05 13:21:32,721 DEBG 'watchdog-script' stderr output:
    <Deferred at 0x146235b76d10 current result: None>
    
    2022-07-05 13:21:37,825 DEBG 'watchdog-script' stdout output:
    Setting "listen_ports" to: (44187, 44187)
    Configuration value successfully updated.
    
    2022-07-05 13:21:37,825 DEBG 'watchdog-script' stderr output:
    <Deferred at 0x14ffe7286b60 current result: None>
    
    2022-07-05 13:21:42,935 DEBG 'watchdog-script' stderr output:
    <Deferred at 0x155476456c20 current result: None>
    
    2022-07-05 13:21:43,122 DEBG 'watchdog-script' stdout output:
    [info] No torrents with state 'Error' found
    
    
    2022-07-05 13:21:43,123 DEBG 'watchdog-script' stdout output:
    [info] Starting Deluge Web UI...
    
    2022-07-05 13:21:43,123 DEBG 'watchdog-script' stdout output:
    [info] Deluge Web UI started
    
    2022-07-05 13:21:43,126 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start Privoxy...
    
    2022-07-05 13:21:44,135 DEBG 'watchdog-script' stdout output:
    [info] Privoxy process started
    [info] Waiting for Privoxy process to start listening on port 8118...
    
    2022-07-05 13:21:44,142 DEBG 'watchdog-script' stdout output:
    [info] Privoxy process listening on port 8118
    
    2022-07-05 13:35:55,070 DEBG 'start-script' stdout output:
    [info] Successfully assigned and bound incoming port '44187'
    
    2022-07-05 13:48:33,559 DEBG 'watchdog-script' stderr output:
    Unhandled error in Deferred:
    
    
    2022-07-05 13:48:33,566 DEBG 'watchdog-script' stderr output:
    
    Traceback (most recent call last):
    File "/usr/lib/python3.10/site-packages/twisted/internet/defer.py", line 858, in _runCallbacks
    current.result = callback( # type: ignore[misc]
    File "/usr/lib/python3.10/site-packages/twisted/internet/defer.py", line 1338, in _cbDeferred
    self.callback(cast(_DeferredListResultListT, self.resultList))
    File "/usr/lib/python3.10/site-packages/twisted/internet/defer.py", line 662, in callback
    self._startRunCallbacks(result)
    File "/usr/lib/python3.10/site-packages/twisted/internet/defer.py", line 764, in _startRunCallbacks
    self._runCallbacks()
    --- <exception caught here> ---
    File "/usr/lib/python3.10/site-packages/twisted/internet/defer.py", line 858, in _runCallbacks
    current.result = callback( # type: ignore[misc]
    File "/usr/lib/python3.10/site-packages/deluge/ui/web/json_api.py", line 187, in _on_rpc_request_failed
    return self._send_response(request, response)
    File "/usr/lib/python3.10/site-packages/deluge/ui/web/json_api.py", line 229, in _send_response
    response = json.dumps(response)
    File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
    File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
    File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
    File "/usr/lib/python3.10/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
    builtins.TypeError: Object of type Failure is not JSON serializable
    
    
    2022-07-05 13:48:33,566 DEBG 'watchdog-script' stderr output:
    Unhandled error in Deferred:
    
    
    Traceback (most recent call last):
    Failure: deluge.error.AddTorrentError: Torrent already in session (31f39e3afe7c418ea6c10ea1b650e7d6d00680cf).
    
    
    
    2022-07-05 13:50:55,508 DEBG 'start-script' stdout output:
    [info] Successfully assigned and bound incoming port '44187'

    [... the same two lines repeat roughly every 15 minutes, through 2022-07-06 10:51:31 ...]

     

     

    I'm battling what seems to be the same issue. Using PIA and OpenVPN. I've switched OVPN .conf files for endpoints but no luck.

     

    The /mnt/user/appdata/binhex-delugevpn/deluged.log file shows this on every container restart. These are the only log entries I can see, in either my container logs or my pfSense logs, that point to any sort of error. pfSense actually doesn't show anything being blocked, and I haven't made any firewall changes in ... jeez, months:

     

    19:57:15 [INFO    ][deluge.core.rpcserver         :179 ] Deluge Client connection made from: 127.0.0.1:42028
    19:57:15 [INFO    ][deluge.core.rpcserver         :205 ] Deluge client disconnected: Connection to the other side was lost in a non-clean fashion: Connection lost.
    19:57:16 [INFO    ][deluge.core.rpcserver         :179 ] Deluge Client connection made from: 127.0.0.1:42030
    19:57:17 [INFO    ][deluge.core.rpcserver         :205 ] Deluge client disconnected: Connection to the other side was lost in a non-clean fashion: Connection lost.
    19:57:18 [INFO    ][deluge.core.rpcserver         :179 ] Deluge Client connection made from: 127.0.0.1:36762
    19:57:18 [INFO    ][deluge.core.rpcserver         :205 ] Deluge client disconnected: Connection to the other side was lost in a non-clean fashion: Connection lost.
    19:57:19 [INFO    ][deluge.core.rpcserver         :179 ] Deluge Client connection made from: 127.0.0.1:36770

     

    I've changed endpoints from Austria to Switzerland with the same result. But I don't think the VPN is the issue here: the startup logs indicate that the VPN is working and that amazonaws returns an address I expect (meaning, not my own). The container itself has internet access.

    Deluge accepts download requests from Prowlarr, etc.

     

    Deluge just ... never actually starts downloading anything.

     

    This all started in the last few days. Was working fine last week (and indeed, for years up until maybe this past weekend).

     

    UPDATE - RESOLVED

    I have no idea how or why, but for some reason my binhex-delugevpn container ended up with TWO port definitions pointing to port 58946: the container definition explicitly listed "Host Port 3" as port 58946 and "Host Port 4" with the same port number. Normally a conflict like that does not allow the container to start. Yet, somehow, it did.

     

    I removed the "Host Port 4" line from the container template via the UnRAID web GUI and now things are ticking along just fine. Note that you should NOT do this unless you actually see a duplicate entry, i.e. 'Host Port 4' is a duplicate of 'Host Port 3'. Curious that the container still started with duplicate port entries and Docker didn't at least throw a WARN.

     

    I ... I swear I didn't do this myself. I don't run any docker-compose scripts or commands outside of the UnRAID GUI. I tinker on other systems. I largely leave this server sitting in my network closet and only connect to it to add things to the DL list.

     

    Like Anton from the TV show 'Silicon Valley', my server:

    "Grimly does his work, then he sits motionless until it's time to work again. We could all take a page from his book."

     

    Anyway, if anyone else here is having issues with this container downloading files yet otherwise successfully running, take a look in your configuration and ensure you don't have any extra "Host Port" lines or other duplicate entries.
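
    A quick way to spot duplicates without clicking through the GUI - a sketch, substituting your own template filename and port:

    # More hits than the expected TCP/UDP pair is suspect
    grep -c '58946' /boot/config/plugins/dockerMan/templates-user/binhex-delugevpn.xml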

  15. On 10/26/2021 at 1:18 PM, jxjelly said:

    For other people looking for the answer without having to click through. 

     

    It's 3 failed attempts in a 15 minute interval

    Great. I fat-fingered my login because my password locker wasn't available at the time.

     

    This is missing the forest for the trees, though. The Web UI wouldn't be the attack vector - SSH is already open, and that's where attackers would focus their efforts in a serious security breach. Well, maybe the web UI could be used for a 'Bobby Tables' type of situation.

     

    (xkcd: "Exploits of a Mom")

     

    Sigh. I guess it would be a vector of attack... (yes I just literally talked myself out of my own argument)

  16. On 4/20/2019 at 2:48 PM, trurl said:

    No. And I have never heard any good argument for "balancing the load out".

    Well, this may come back to bite ya. Yes, there could be reasons to 'balance the load out'. I know this is 3 years old, but I was looking up another issue while clearing out my cache disk and reformatting it from BTRFS to XFS, and while that's happening this comment caught my eye.

     

    I could sit here and say the same thing, in a sense. "I have never heard any good argument for *not* "balancing the load out"".

     

    I suppose on a technical level, without much understanding of how the UnRAID FUSE FS works under the hood, sure - maybe it's fine to frontload a bunch of drives with data and default to a high-water setup. But from an end-user perspective (read: optics), it gives a sense of comfort to know that your disks are being used evenly, even if you and I know it doesn't mean much on the technical side.

  17. Suggestion - or, if there's already a way to do this and I missed it, let me know:

     

    A flag (checkbox in the UI) to skip sending alerts when the only finding is available docker updates.

     

    Netdata, for example, has updates just about every day, and I auto-update my containers overnight. Yet, because of the timing gap between when this plugin runs its scan (and detects available Docker updates) and when my auto-update runs, my phone beeps every morning around 3 AM. 95% of the time it's this plugin, telling me an update is available for some container that will be auto-updated within 24 hours anyway.