Seltonu

Everything posted by Seltonu

  1. Update for anyone who stumbles across this. I updated both of my Samsung 980s at the same time using the ISO Samsung provides, booted off a flash drive. This completely fixed the overheating issue for me; I haven't gotten a single warning in the few days since, when I usually get a few a week. I did run mover first just in case, but didn't bother to back up my dockers and took the risk. At first my dockers were broken after rebooting, but I ran mover again, rebooted again, and they work fine. It seems no data was lost, but if you have docker data you're not willing to lose, definitely back it up, because that first reboot with them broken was a bit scary (they wouldn't launch and said something about "image couldn't be deleted", if I remember correctly). All in all, fairly painless, and very easy/quick.
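     If you need to make the bootable flash drive on Linux, something like this should work (the ISO filename and /dev/sdX are placeholders, so treat it as a sketch and double-check the target device before writing):
       # Find the flash drive's device name first; /dev/sdX below is a placeholder.
       lsblk
       # Write the Samsung firmware ISO to the drive (this wipes the drive's contents).
       sudo dd if=Samsung_SSD_980_Firmware.iso of=/dev/sdX bs=4M status=progress conv=fsync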
  2. @jbuszkie, wondering if you've run into the issue since updating the firmware? If not, I'll be updating my firmware too. This has been driving me crazy.
  3. This seems to have worked for me, posting here in case anyone stumbles across this: https://linuxize.com/post/how-to-mount-cifs-windows-share-on-linux/ I then used Flatseal to give Darktable access to the /mnt/photos directory.
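     In short, the mount plus the Flatseal step boil down to something like this (server, share, and mount point are from my other post; the credentials file and uid are assumptions to adjust, and the flatpak override line is just the command-line equivalent of what Flatseal does in its GUI):
       # Mount the SMB share (needs cifs-utils installed)
       sudo mkdir -p /mnt/photos
       sudo mount -t cifs //mynas.local/photography /mnt/photos -o credentials=/etc/samba/creds,uid=1000,gid=1000
       # Give the Darktable flatpak access to the mount point
       flatpak override --user --filesystem=/mnt/photos org.darktable.Darktable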
  4. I have an SMB share where I store my photos, and I would like to be able to import them in the Darktable flatpak. However, I can't find much information on how to accomplish this. The path in a terminal on my PC looks like /run/user/1000/gvfs/smb-share:server=mynas.local,share=photography. Adding this path via Flatseal doesn't work, and Darktable can't see any photos in it when trying to import the folder via the GNOME file picker.
  5. Hello! Hopefully this is the right place to ask: is it possible to set up "ionice" settings for dockers so that I can prioritize IO operations for some applications (e.g. media, game servers) and lower the priority of others (e.g. torrent applications)? I have a cache drive, but this is not a great solution for me, as the majority of my media library is on spinning disks. Lots of read/write operations can cause my media server to buffer heavily, so I would really like to be able to set IO priority.
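     To illustrate what I mean, something along these lines run from the host is what I'm imagining (qbittorrent is just an example container name, and I have no idea yet whether this is the supported way to do it):
       # Find the container's main process and drop its IO priority to idle.
       PID=$(docker inspect -f '{{.State.Pid}}' qbittorrent)
       ionice -c 3 -p "$PID"   # class 3 = idle; note this only affects that PID, not its child processes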
  6. Ah, my bad. This is my first time using dockers for anything, so I didn't realize it might not be a simple fix/implementation. I've set it to not auto-update as you and wgstarks suggested; this is an easy compromise for one docker. Thank you!
  7. A CA Auto Update problem I've found, which I actually noted in my previous comment: when a container that other dockers rely on for networking is updated, the reliant containers stay down until you manually visit the Docker tab, which triggers a rebuild. Example / steps to reproduce:
       • GluetunVPN is a frequently updated VPN docker used to route other dockers through. Set it up with VPN info.
       • qbittorrent is an example docker to route through GluetunVPN (Extra Parameters: --net=container:GluetunVPN).
       • Using CA Auto Update, wait for GluetunVPN to update (every ~2-3 days).
       • After the update, the qbittorrent docker is down until you manually visit the Docker tab. Once there, any routed containers automatically trigger a "rebuilding" state and come back up.
     Expected behavior: any "child" docker container routing its network through a "parent" container should automatically be rebuilt after the "parent" is auto-updated by CA Auto Update. This is especially frustrating because of how often some "parent" containers like GluetunVPN push updates, meaning a manual visit to the Docker tab every 2-3 days to trigger the rebuild CA Auto Update didn't.
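     As a stopgap until this is handled, I've been considering a scheduled user script along these lines (a sketch only; container names are from my setup, and it can't fully replace the Docker-tab rebuild if the old Gluetun container was removed):
       #!/bin/bash
       # If GluetunVPN is running but a dependent container isn't, try to start it.
       # Note: if the update removed the old Gluetun container, a plain start can still fail
       # because the --net=container: reference is stale, and the dependent container then
       # has to be rebuilt (which visiting the Docker tab does).
       if docker ps --format '{{.Names}}' | grep -qx 'GluetunVPN'; then
           for dep in qbittorrent; do
               docker ps --format '{{.Names}}' | grep -qx "$dep" || docker start "$dep"
           done
       fi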
  8. Seconding the issue; however, unlike darrenyorston, I am completely unable to reinstall the GluetunVPN container. The container disappeared for me after an update this morning, and it fails with an error on reinstall. I reported it on GitHub here, but it seems I should have reported it here first instead. Clearing the old files from my appdata folder made no difference. This issue is urgent for me because, until I can reinstall the container, the dockers relying on GluetunVPN are down. vault-diagnostics-20220115-2338.zip
  9. EDIT: This was a Gluetun template issue which has now been fixed; please ignore. CA Auto Update broke my setup a bit today. I had a VPN docker (GluetunVPN) set up that a couple of other dockers run through (--net=container:GluetunVPN), and when the VPN was auto-updated this morning it broke/removed the Gluetun install, which left a few other dockers stuck in a perpetual "rebuilding" stage looking for the VPN to route through. This has never been an issue when manually clicking update on Gluetun, but this was the first time it auto-updated via the plugin. I've attached the diagnostics file.
  10. Oddly enough, today I can't reproduce this bug, even with a reboot. The only thing I can think of is that the mover had a chance to run again (though it was run yesterday after installing the cache drive), so maybe it was related to some files not getting properly migrated? But like I said, the share was even deleted/recreated with Cache Only. I'm unsure if I should close this bug; I'll attach system diagnostics just in case. Apologies for not having diagnostics from while the bug was occurring; I don't know if this will be helpful. vault-diagnostics-20211219-1605.zip
  11. When setting up a Minecraft server through TQ's docker (highest download count under "minecraft" in CA), if the data path is set to /mnt/user/minecraft/ the Minecraft server is unusably slow and crashes 60 seconds after a user tries to break a block (server tick timeout). The server works perfectly when this value is changed to /mnt/cache/minecraft/. Completely reproducible for me after a remove/reinstall and a reboot. Expected behavior is that either path value should work. The share "minecraft" is an empty share set to Cache Only. Also reproducible even after deleting and recreating the share. Hopefully someone else can try to reproduce this; I won't be able to test much starting tomorrow, as users will be on the server. Please let me know if there's any information to add, I'm brand new to Unraid!
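     For anyone trying to reproduce this, a crude way to compare the two paths is below (file names and sizes are arbitrary). My understanding is that /mnt/user/... goes through Unraid's user-share (FUSE) layer while /mnt/cache/... writes to the cache pool directly, which is presumably where the slowdown comes from:
       # With a cache-only share, both files end up on the cache pool.
       dd if=/dev/zero of=/mnt/user/minecraft/test_user bs=1M count=256 conv=fsync
       dd if=/dev/zero of=/mnt/cache/minecraft/test_cache bs=1M count=256 conv=fsync
       rm -f /mnt/cache/minecraft/test_user /mnt/cache/minecraft/test_cache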