Everything posted by DZMM

  1. Ahh didn't spot that in the docker description - thanks
  2. I have the exact same question. I've got the docker up and running and I can see it's creating a calibredb file and is automatically adding files. Do other people still run the calibre docker on top to edit files? And do edits made in Calibre get automatically picked up/synchronised by the LL docker? Thanks
  3. This docker is running version 3.1.1, released Nov 2017, and the current version is 3.29, released last week - any plans to upgrade? Thanks
  4. I know, but I love that my machine is a multi-tasking beast and I don't want resources sitting unavailable when they could be put to use while nothing else needs them. My VMs do ok - occasionally I'll get stutters like today, when I was doing a lot of stuff and the kids were on as well, and the VMs would hang now and then for a second, but that's it. I've isolated the cores of my dockers, so only vital dockers (Plex, TVH, Home Assistant etc.) have access to all of them. It's bearable, but I'm curious to see if the stutters reduce by tweaking my pinning.
  5. Thanks. I'm going to try running two VMs on 14 and two on 15 to see what happens vs all on 1,15 - that's if I can tell the difference. I'm not keen on isolating cores - I don't have anything that's mission critical, or so important that it warrants taking potential resources away from other tasks.
  6. Thanks. My VMs are all on different cores; my query was just how many VMs on an emulator pin pair is 'too much'. I think I'm going to go back to spreading the cores my 4 VMs are pinned to, with 2 VMs to each pair. Ok, going to avoid core 0 - I think pinning to 0 might explain stutters I've been getting when unRAID/dockers are busy (see the lscpu snippet below). Do you mean you run the emulator pin just on a hyperthreaded core?
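    For reference, a quick way to see which logical CPUs share a physical core, so you can pick hyperthread pairs and stay off core 0 - this is just stock lscpu, nothing unRAID-specific:

        # logical CPUs that share a CORE value are hyperthread pairs;
        # CPU 0 is the one to leave free for unRAID itself
        lscpu -e=CPU,CORE,SOCKET,ONLINE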
  7. Just found - didn't know this existed. Thanks
  8. I summoned up the courage to do some tagging today for the first time in ages and I got this error - does the docker need updating soon? /usr/lib/python2.7/site-packages/pylast/__init__.py:51: UserWarning: You are using pylast with Python 2. Pylast will soon be Python 3 only. More info: https://github.com/pylast/pylast/issues/265
  9. I'm playing it safe for now and letting Plex decide which one to use, as I have lots of different clients connecting. HDD space isn't an issue for me
  10. It should select all the English tracks, or tracks with no language set, or the track when there's just a single language (e.g. a foreign disc), and not the mvcvideo - I haven't tested yet
  11. Thanks @saarg for the example and @Djoss for the link. I've gone with -sel:all,+sel:(favlang|nolang|single),-sel:mvcvideo,=100:all,-10:favlang (rough breakdown below). I didn't really understand what -sel:(havemulti|havecore) was doing, particularly havecore, but I've removed it as I'm hoping Plex will know which track to select if I'm, say, using a laptop rather than transcoding. I'm ditching mvcvideo as I don't have any 3D content.
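    For anyone else puzzling over the rule, this is roughly how I read it - my own understanding of MakeMKV's selection-rule syntax, so double-check it against the MakeMKV docs:

        # my current default selection rule, annotated (the reading is mine, not official)
        SELECTION_RULE='-sel:all,+sel:(favlang|nolang|single),-sel:mvcvideo,=100:all,-10:favlang'
        #  -sel:all                       start by deselecting every track
        #  +sel:(favlang|nolang|single)   select preferred-language tracks, tracks with no
        #                                 language tag, and the track when it's the only one
        #  -sel:mvcvideo                  drop the 3D MVC video track
        #  =100:all,-10:favlang           tweak track-ordering weights in favour of the
        #                                 preferred language (as I understand the docs)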
  12. Thanks - I've started googling but there seems to be a lot to it. I'll wait to see your profile to see if there are other items in there.
  13. Thanks for this useful docker. I've just realised after several rips that it wasn't defaulting to selecting all audio streams, so I've missed my TrueHD streams on several rips. Is there a way to set this? Thanks
  14. Plexdrive

    - I just updated my main post with my latest scripts - sorry, I wasn't clear. My install script runs every 5 mins to automatically remount if there's a problem; because of the 5 min delay it runs after the 'uninstall' script that runs at start.
    - checkers 10 was just a random number to be honest. I kept them low to start with as I had RAM problems when I first started using rclone, but now that you've reminded me I'm going to try increasing to a much higher number to ensure I hit the bwlimit (example command below).
    - 247 - if there are no .unionfs files or folders, yes it throws up an 'error', but the script still works fine.
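    For context, the kind of upload command those values feed into - the remote name, paths and numbers here are illustrative rather than my exact settings:

        # upload pass (values are examples only):
        #   --checkers  how many file checks run in parallel
        #   --transfers how many files upload at once
        #   --bwlimit   caps total upload bandwidth
        #   --min-age   skips files still being written
        rclone move /mnt/user/rclone_upload gdrive:backup \
            --checkers 16 --transfers 4 --bwlimit 8M --min-age 30m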
  15. Plexdrive

    'touch' just creates a blank file at the start that's removed. I don't think checking the tmp directory will work as scripts don't seem to be removed straightaway when they've finished - I've just looked in my /tmp/user.scripts/tmpScripts directory and there are residual files still there for scripts that have ended.
  16. Plexdrive

    Ahh, I just realised what you mean. I'm creating my own check of whether the script is running by creating a temporary dummy file when it starts, not by querying the actual script file - I borrowed the idea from the mountcheck file in the original script. I have a share /mnt/user/mount_rclone on my server so I dump it there - you were getting the errors before because you need to choose a real location on your own server:

        ####### Check if script already running ##########
        if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
            exit
        else
            touch /mnt/user/mount_rclone/rclone_install_running
        fi
        ####### End Check if script already running ##########
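    And when the script finishes, the dummy file gets removed again so the next run isn't blocked - a minimal sketch, assuming the same path as above:

        # clear the dummy file on any exit so the next scheduled run can start
        trap 'rm -f /mnt/user/mount_rclone/rclone_install_running' EXIT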
  17. Plexdrive

    I'm amazed how much difference it's made to my spinups and I wish I'd thought of it before. I think that's where it keeps running logs etc. in memory.

    The actual scripts are at /boot/config/plugins/user.scripts/scripts/, which is the path you should add to dockers when you call the script.

    I've started doing this again as I'm confident it's all working now. I do a check at the start of the script to see if an instance is already running, if rclone is already mounted, etc. - follow the flow and you'll see it works (sketch of the mount check below).
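    A minimal sketch of that mount check - the mountcheck filename and mount point here are illustrative, so substitute whatever your own mount script uses:

        # if the test file is visible through the mount, rclone is already up - bail out
        if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: rclone already mounted, nothing to do."
            exit
        fi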
  18. This has definitely fixed my 'errors after restarting' problem
  19. Plexdrive

    I've updated my script post with a few updates I've made:

    - new rclone mount settings to improve start times and reduce API calls
    - I now run the uninstall script at array start as well, in case of an unclean shutdown
    - the upload script now excludes the .unionfs/ folder (@slimshizn I think this might be your problem)
    - the upload script alternates between the cache and one array drive at a time, to try and reduce pointless transfers to the array and to stop multiple array drives spinning up at the same time for the 4 transfers (rough sketch below)
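    The shape of the alternating upload, heavily simplified - the disk list, remote and paths are illustrative, not the actual script:

        # rotate the source between the cache and one array disk per run, so only
        # one drive spins up per upload pass; always skip unionfs metadata
        SOURCES=(/mnt/cache/rclone_upload /mnt/disk1/rclone_upload /mnt/disk2/rclone_upload)
        COUNTER_FILE=/mnt/user/mount_rclone/upload_counter
        COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
        SRC=${SOURCES[$((COUNT % ${#SOURCES[@]}))]}
        echo $((COUNT + 1)) > "$COUNTER_FILE"
        rclone move "$SRC" gdrive:backup --exclude ".unionfs/**" --transfers 4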
  20. Plexdrive

    Yes, if it has been deleted. I think if you rename a RO file via unionfs, unionfs creates a new file in the RW folder to be uploaded and hides the old file in the RO folder, i.e. you have to upload it all again - so it's best to make sure you're happy with everything before uploading, and why it's probably best not to run your upload script too often, to let things 'settle down'.

    Edit: actually I think it does the same as below - hides the old name, and when an app clicks on the newly named file it just opens the old file. I do think there's a risk that when the upload script comes around it might download the old copy so it can upload the new copy, i.e. wasted effort.

    Ditto with moving. I think moving is safe if you don't use the cleanup script, as I think unionfs will hide the old location and pretend the file is in the new location. However, I think the cleanup script will mess things up, as it only works for files in the original folder location and doesn't read the unionfs data in the _HIDDEN file that gives the new location.
  21. Plexdrive

    Yes - if radarr tells unionfs to delete a file from the unionfs mount that's in the local RW folder, it does it straight away; if it's a file that's already been uploaded to the RO folder, it hides it so radarr (and everything else) thinks it's been deleted.
  22. Plexdrive

    Unionfs hides the old media immediately, creating behaviour just like a normal drive, which keeps radarr happy. The cleanup script only ensures the file is actually deleted rather than hidden, in case you rebuild your unionfs mount or mount differently and suddenly find all the hidden files re-appearing (the shape of that pass is sketched below).
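    Roughly what that cleanup pass looks like - this is just the shape of it, with illustrative paths, remote and whiteout suffix (check what your unionfs-fuse version actually writes), not the actual script:

        # for every unionfs whiteout marker, really delete the matching cloud file,
        # then drop the marker so nothing 're-appears' if the mount is rebuilt
        find /mnt/user/rclone_upload/.unionfs -name '*_HIDDEN~' -type f | while read -r marker; do
            rel="${marker#/mnt/user/rclone_upload/.unionfs/}"
            rel="${rel%_HIDDEN~}"
            rclone delete "gdrive:backup/$rel"
            rm -f "$marker"
        done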
  23. Plexdrive

    Radarr should never see two files in the folder if it's looking at the unionfs mount - when file1.mkv moves from the local upload folder to the cloud, it will always appear to radarr that it was in the same place in the unionfs mount the whole time.