wgstarks

Members · Posts: 5,366 · Days Won: 4

Everything posted by wgstarks

  1. It means that the UPS will shut down after the computer and then power on again when power is restored, triggering the computer to boot (if the computer's boot options are configured for this). If the UPS never shuts down, the computer never sees power being restored, so no boot up.
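     If it helps, this is apcupsd's kill-power step doing the work (assuming an APC unit on unRAID's bundled apcupsd - a rough sketch from memory, not my exact config):

        # /etc/apcupsd/apcupsd.conf (sketch - check your own values)
        BATTERYLEVEL 10   # start the server shutdown at 10% charge...
        MINUTES 5         # ...or at 5 minutes of estimated runtime left
        TIMEOUT 0         # 0 = use the thresholds above, not a fixed timer
        # With "turn off UPS after shutdown" enabled, the halt script ends with
        #   apccontrol killpower
        # which cuts the UPS outlets and lets it power back on when mains returns.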
  2. I did this a few years ago by just pointing the LSIO docker to my existing data. The dockers have changed since then though, so YMMV. I would think that if you tried this and it failed, it would be easy to revert to the LT docker. Haven't ever tested that, though.
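     From memory it was nothing more than a volume mapping, something like this (assuming the linuxserver Plex image; the paths are from my setup, and the folder layout inside /config can differ between the two images - that's the YMMV part):

        # point /config at the existing appdata; the media mapping stays the same
        docker run -d --name=plex --net=host \
          -e PUID=99 -e PGID=100 \
          -v /mnt/user/appdata/PlexMediaServer:/config \
          -v /mnt/user/Media:/media \
          linuxserver/plex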
  3. Latest Linux version of the app is 1.9.1.4272-b207937f1
  4. That looks like a version number for the app, but it's incomplete. The easiest way to install the latest version is to leave the version variable set to "latest".
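     In the linuxserver template that's just the Version field, which gets passed into the container as an environment variable - roughly (assuming the linuxserver image):

        # leave it on auto-update:
        -e VERSION=latest
        # or pin a specific build by giving the full string, not just part of it:
        -e VERSION=1.9.1.4272-b207937f1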
  5. Right. I think you may actually want to activate both, though. That way you can keep the online database and any other players in sync.
  6. It's a setting in the Trakt channel in the Plex app. You can sync up and/or down. You'll need to have the app running for syncing up, though.
  7. @MowMdown Thanks for this. I modified it slightly, setting /incomplete to a UD-mounted disk outside the array, but it's still the same basic idea. Finally got a chance to test a little. Grabbed a decent-sized torrent with a few hundred seeders. Download speeds peaked around 30 MB/s and never dropped below 10 MB/s; the average was probably about 25 MB/s. Looks like my problems were primarily caused by too many slow writes to parity. Is there a good tool for testing speeds?
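     For anyone following along, the only real change was the host side of the /incomplete mapping - the paths here are my setup, adjust to yours:

        # completed files stay on the array; in-progress writes go to the UD disk
        -v /mnt/user/downloads:/data
        -v /mnt/disks/WD_Red/incomplete:/incomplete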
  8. Not sure what the app market is, but I would recommend you install Community Applications and then search it for the Unassigned Devices plugin. I believe this is what the Preclear plugin is looking for.
  9. Could you post some details on how to do this? I really hate monkeying around with the docker template when I don't know what I'm doing.
  10. I'm going to attempt to relocate my /data. My plan is to stop the Deluge docker and associated indexer dockers and then copy all the folders in /data. Would just using cp -a work alright for this? Don't want to create any permissions issues.
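     This is the plan, roughly - container names and paths here are just mine, adjust to your setup:

        docker stop deluge sonarr radarr           # stop anything writing to /data first
        cp -a /mnt/user/data /mnt/disks/WD_Red/    # -a preserves owners, permissions and times
        # or rsync, which shows progress and can be resumed if interrupted:
        rsync -avh --progress /mnt/user/data/ /mnt/disks/WD_Red/data/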
  11. Looks like it might be just a little slower. At the 0 and 8TB points, for sure.
  12. I almost forgot about this. Seagate ST8000DM004, shucked from a Seagate Expansion Unit. 8 test points, 3 iterations.
  13. It will move the files, but no sorting and renaming AFAIK. Haven't really investigated all the Deluge plugins, so there may be one that does. I'd wondered why my up/down speeds keep dropping to zero at random times.
  14. Worth a shot. I need to check the size of my incomplete folder, but I think this is worth testing. Thanks. Seems so obvious once someone else thinks of it.
  15. In my case the disk would appear at first and then disappear as soon as I tried to format it.
  16. If I use Radarr to move (rather than copy) the completed file, won't Deluge lose the file for seeding?
  17. My /data size is a little over 4TB right now, so SSDs are not really an option, but I've got a 4TB WD Red that is preclearing. I'll mount it in UD and see what happens using it. Side issue: does anyone know which version of Python the docker uses? I really need to get the "Auto Remove Plus" plugin installed.
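     For anyone else wondering the same thing, checking from the host should be as simple as this (container name is whatever yours is called):

        docker exec deluge python --version
        # if the image only ships versioned binaries, try these instead:
        docker exec deluge python2 --version
        docker exec deluge python3 --version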
  18. Thanks. I've tried playing around with the upload limit, but it didn't seem to make much difference. My speeds never seem to exceed about 10% of the theoretical limit, so setting the upload limit to 300 MB/s didn't change anything. I almost never get more than 1 MB/s upload. Honestly, I can live with the slow speeds. My real concern is why the docker is bogging down my server so badly. IDK, maybe the two problems are related?
  19. I'm seeing the same speeds (between 0 and 2 MB/s) being reported by others. Deluge with PIA Netherlands. This docker is also really slowing down my system. CPU usage is very low (<5%) and network activity is also low, of course, but my server really drags. Had to do a data rebuild on a disk replacement this weekend and realized that with Deluge running the rebuild ETA was being measured in months. Once I stopped the docker, the rebuild proceeded at a decent speed. Makes me wonder if this is being caused by the number of files writing to the array, even if most of them are writing slowly. Right now I'm using a parity-protected unRAID share for my Deluge /data path. Would it be better if I mounted a drive in UD via eSATA or USB3 for my /data instead of writing the torrents to a parity-protected share?