Chaos_Therum

Everything posted by Chaos_Therum

  1. So I'm running into some strange issues. I recently migrated from unionfs to mergerfs, and so far the actual file browser responsiveness is far better. But I'm running into issues with files appearing corrupt, or my software just not seeing them. From what I can tell the files are not actually corrupt; they open and play fine, but things like metadata aren't showing up properly. For example, when I tell MediaMonkey to scan my music collection, it picks up maybe 10 to 15 files at a time and then reports that files aren't available, even though I can play them. I'm assuming this is some sort of timeout issue, but I didn't have any issues like this while using the unionfs setup, just folders that wouldn't delete. I'm also getting weird permissions issues, only for them to go away after a refresh of the folder. My main system is Windows. Has anyone else run into issues like this? I've looked around but haven't found anything. I'm not sure what other info to provide, so please let me know if there's anything else you'd need to know.
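Symptoms like this (files play but metadata lags, attributes appear stale until a folder refresh) are often tied to mergerfs caching and policy options rather than actual corruption. A hedged sketch of an fstab entry with the options that usually matter for SMB clients; the branch and pool paths are examples, not the poster's actual layout:

```
# /etc/fstab sketch -- branch/pool paths are hypothetical
# cache.files=off + dropcacheonclose avoid serving stale page cache;
# func.getattr=newest returns attributes from the most recent copy.
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino,cache.files=off,dropcacheonclose=true,func.getattr=newest,category.create=mfs,minfreespace=20G 0 0
```

Which combination is right depends on the mergerfs version and workload, so treat this as a starting point for experimentation, not a fix.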
  2. Not totally sure where I would start with something like this but you could probably code up a userscript to display a run status. I would have to imagine all this info is available to be displayed.
  3. Hrm, that makes sense. I guess it's just something wrong with qbittorrent itself then. When I said new update I meant for the docker, not qbittorrent. Thanks for all the hard work; half of the dockers I have running are yours, and I couldn't get a docker with a vpn working to save my life.
  4. Well, an update just came through; let's hope it solves this issue. I'm thinking it has something to do with the way the docker shuts down. I think it's an unclean exit that qbittorrent isn't handling gracefully, though it could just be down to the massive refactoring the webui is going through right now. Without automatic torrent management it kind of defeats the entire purpose of using qbittorrent, for me at least.
  5. So I'm running into the issue that every time I restart the docker, the save paths for all of my torrents reset to /config/qbittorrent, and automatic torrent management gets disabled. I think this might be down to qbittorrent not handling improper shutdown very gracefully; I'm assuming qbit is just killed when you restart the docker. Is there any solution to this?
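One thing worth ruling out: Docker only waits a short grace period (10 seconds by default) after SIGTERM before it SIGKILLs the process, and qbittorrent can take longer than that to flush its fastresume data. A sketch of restarting with a longer stop timeout; the container name "qbittorrent" is an assumption, substitute your own:

```shell
# Give qbittorrent up to 120 s to save its session state before
# Docker escalates to SIGKILL (default grace period is only 10 s).
docker stop -t 120 qbittorrent
docker start qbittorrent
```

On Unraid specifically, the equivalent knob is the per-container "Stop timeout" in the Docker template's advanced settings.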
  6. It's definitely something to do with the Deluge container that causes my issues. I can run Deluge on my desktop no problem something about the container my network doesn't like. But this is the wrong page for this. As I said I was misremembering it definitely wasn't the qbittorrent docker that was causing me issues.
  7. I've been running into this issue as well. Personally qbittorrent is my favorite client and it kills me that the one good docker with built in vpn doesn't work. I wonder if it's something with our internal networks? It's super annoying I was getting the same stuff except my lower end packet loss was much higher I was probably closer to 20-80% Wait I actually take back this statement it was the delugevpn container that was causing me issues. Still strange that we would get the same issue on two different containers.
  8. So I just got back from out of town. I had dd cloning the disks while I was gone, and now I'm having some issues trying to mount the image. I tried this command:
     losetup —partscan -f /mnt/disk1/bkp.img
     and I'm getting this output:
     losetup: —partscan: failed to set up loop device: No such file or directory
     I've tried modprobe, but I'm not exactly sure what I'm doing. Basically just tossing in commands.
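The quoted error is consistent with the dash problem: the command above contains an em-dash (likely forum auto-formatting of a pasted "--"), so losetup treats "—partscan" as a filename instead of an option. A sketch with the proper double hyphen; the mount point /mnt/rescue is an example:

```shell
# --find picks a free loop device, --show prints which one was used,
# --partscan makes the kernel expose the image's partitions.
losetup --partscan --find --show /mnt/disk1/bkp.img
# If it prints e.g. /dev/loop0, partitions appear as /dev/loop0p1,
# /dev/loop0p2, ... and can be mounted read-only:
mkdir -p /mnt/rescue
mount -o ro /dev/loop0p1 /mnt/rescue
```

Mounting read-only is a sensible precaution while the data only exists in this one image.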
  9. One more quick question: couldn't I do ddrescue to a file rather than direct to the drive? I know I've used regular old dd to output to a file that was then mountable. I don't have quite enough space to clone to a bare drive and then move the files over; this caught me with my array nearly full.
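Yes, ddrescue's output argument can be a regular file just as easily as a device. A sketch of the usual two-pass approach; the source device and destination paths are examples:

```shell
# First pass: -n skips the slow scrape phase and grabs the easy
# data quickly. The third argument is the mapfile, which lets an
# interrupted or repeated run resume where it left off.
ddrescue -n /dev/sdX /mnt/disk1/bkp.img /mnt/disk1/bkp.map
# Second pass: retry the bad areas a few times (-r3).
ddrescue -r3 /dev/sdX /mnt/disk1/bkp.img /mnt/disk1/bkp.map
```

Keeping the mapfile alongside the image is the important part; without it, ddrescue cannot tell which regions are already recovered.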
  10. Okay, thanks for the help. So I guess the best way to go about this would be to run ddrescue from one old drive to the new one, transfer that data to the array, then repeat the process with the other drive? This sucks; these are the first drives I've ever had fail on me in over 12 years of data hoarding.
  11. Would you recommend keeping the array down until I get the new drive tomorrow? And are these definitely failing, or could this just be a glitch somewhere that could be resolved?
  12. So it turns out both of the unmountable drives have pending sectors. I have a new 8 TB coming in the mail; it should be here tomorrow. I've been trying to save up the money to get a parity drive in there. Here are my diagnostics: tower-diagnostics-20180507-0902.zip Also, is it safe for me to keep the array up, or should I keep it shut down until this is resolved?
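For anyone hitting the same situation, the pending-sector counts can be read directly with smartctl before deciding whether a drive is actually dying. A sketch; /dev/sdb is an example device name:

```shell
# Dump SMART data and pull out the attributes that matter for
# a suspected failing disk.
smartctl -a /dev/sdb | grep -Ei 'current_pending|reallocated|offline_uncorrect'
# Optionally kick off an extended self-test to see whether the
# pending sectors are persistent (check results later with -a).
smartctl -t long /dev/sdb
```

A small, stable pending count sometimes clears on rewrite; a growing count alongside reallocated sectors is the classic sign the drive should be replaced.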
  13. So my unraid setup has been running pretty solid for a couple months now. Today I noticed a pending sector on one of my older drives. I don't have a spare, but I have enough space that I planned to stop the array, remove that one drive from the array, restart the array, and mount that drive using the unassigned devices plugin. Well, I tried to stop the array and it got hung on unmounting user shares. I figured no big deal, I'd just turn the server off and back on. It went through a graceful shutdown, and after it restarted I got back into the webui and set the drive with the pending sector to unused; little did I know unraid doesn't allow this. I set the drive back to the proper one and tried to start my array, and now two of my disks are showing up with no filesystem, even though only one was having a pending sector issue. I currently have the array shut down, as I'm not sure how to deal with this without destroying the data on the disks. I believe I have enough space to move the data from these drives into the array; if not, I'll just pick up another drive. What would y'all recommend as a solution to my issue?
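"Unmountable: no file system" after an interrupted stop is often a dirty filesystem rather than lost data. Assuming the disks are XFS (Unraid's default at the time), the usual approach is a read-only filesystem check against the emulated md device with the array started in maintenance mode; the disk number here is an example:

```shell
# Dry run first: -n makes no modifications, -v is verbose.
xfs_repair -nv /dev/md1
# Only if the dry-run output looks sane, run the actual repair:
xfs_repair -v /dev/md1
```

Running against /dev/md1 rather than the raw /dev/sdX device keeps parity in sync; this is a sketch of the standard procedure, not a substitute for checking the diagnostics first.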
  14. Yeah, that is by far the most confusing thing to me as well. I generally run my clients with somewhere between 150 and 200 connections. I limited Deluge down to 50 and it's still causing the issue, while still not causing any external latency. I have a hard time accessing my router admin panel and unraid due to the latency, but I can play online games without any latency, or at least no more than usual, i.e. 30 ms. I'm pulling my hair out trying to figure this one out.
  15. My issue isn't speeds; I am using one of the port-forwarded endpoints. I'm having a completely different issue.
  16. My router doesn't have qos settings, so unless it's doing something automatically I don't think that would come into play. I'll try upgrading to the latest RC, but I was having the same issue in RC 3. I only have around 20-30 torrents in deluge; I've comfortably run over 400 torrents in deluge before, and yeah, it would slow down, but never flood my network like this. What's really strange is that it doesn't affect external ping times. My ping times to google, for instance, remain around 30 ms, whereas pings to my router or other computers on my local network vary anywhere from 1 ms up to around 600. Like I said, running the exact same version of deluge with the same torrents on my desktop isn't causing this issue, so it's definitely not an issue with deluge itself.
  17. Yeah, that didn't seem to do anything. This isn't a problem affecting my bandwidth; I can download and use everything normally, it's just causing huge ping times. I've never really bothered to limit my upload in other clients and have never experienced this issue. Even desktop deluge hasn't caused this issue, which is why I'm thinking it's something to do with this docker.
  18. So this container seems to just destroy my network. When I have the container running I'm getting anywhere from 40 ms to 800 ms ping times to my router from my computer. The rutorrent container doesn't cause the same issue, but I much prefer deluge. Has anyone else experienced this problem, and if so, what was the solution?