ConnectivIT

Members
  • Content Count

    100
  • Joined

  • Last visited

Community Reputation

1 Neutral

About ConnectivIT

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. Thanks for letting me know. For anyone else facing this issue, I ended up excluding these in CA Appdata Backup: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata and /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media, and then backing up those paths separately using this Borg script: https://www.reddit.com/r/unRAID/comments/e6l4x6/tutorial_borg_rclone_v2_the_best_method_to/ (a minimal Borg sketch is at the end of this list). There are many options for this; the script above is one possible solution and the post discusses some of the others too.
  2. Thanks for all the work you've put into this, @Squid. I think a number of people are having issues with how long backup/verify is taking (particularly users of the Plex docker with hundreds of thousands or even millions of files). I had an idea that might resolve this: if we think of the current backups as an "offline backup" (taken while the dockers are stopped), it would be nice if we could also specify a list of folders for an "online backup" (i.e. paths that don't contain databases and are safe to copy while the dockers are running).
     So in the settings you could specify something like: online backup folders: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata, /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media
     These paths would automatically be excluded from the normal offline backup. Once the offline backup completed, the dockers would be restarted and the script would then copy the folders above into a second .tar.gz file. For the purposes of restoring / trimming old backups, the pair of .tar.gz files could be treated as a single backup. (A rough sketch of this idea as a standalone script is at the end of this list.)
     edit: or just make the online backup append to the same .tar.gz file? That would mean a lot fewer changes elsewhere in the script.
  3. See my post above - NZBGet plugins generally require Python 2.7, so I had to switch my NZBGet container to linuxserver/nzbget because the binhex container includes Python 3.x. As long as you point linuxserver/nzbget at the same docker paths this should "just work" (make a backup first!).
     I'm guessing you have already checked the "Scripts" path in NZBGet (this is usually something like ${MainDir}/scripts). If your ${MainDir} is /data then your scripts need to be in /data/scripts. Each custom script should go in its own folder, so from inside the docker you should have something like /data/scripts/VideoSort/VideoSort.py (and a bunch of other files). A quick way to check this from the unRAID shell is sketched after this list.
  4. Not sure if this is related, but I was finding NZBGet downloads failing with complete RAR archives that it couldn't extract; I never had this problem under Windows. I assumed it was some kind of issue with nested RAR files and added this NZBGet script: https://forum.nzbget.net/viewtopic.php?t=1690&start=20 Whatever the issue was, I haven't had any problems since. Note that I had to switch from the binhex to the linuxserver docker image, as I use a few NZBGet scripts that require Python 2.7 (binhex includes Python 3.x).
  5. Thanks for the docker container. Has anyone else been able to get GPU acceleration working with this? I've followed the instructions for generic dockers here: https://shinobi.video/docs/gpu but nvidia-smi isn't available inside the container:
     /opt/shinobi # nvidia-smi
     sh: nvidia-smi: not found
     I'm guessing the docker container needs some additional resources to make this work? (The runtime flag and variables I'd expect to need are sketched after this list.)
  6. Just a heads up, I had to switch to linuxserver/nzbget because I couldn't get VideoSort.py plugin working. Fairly sure this is because binhex/arch-nzbget includes python 3.7.x, whereas linuxserver/nzbget uses python2.7: https://hub.docker.com/r/linuxserver/nzbget/dockerfile (python2 \)
  7. You should have no issues. Any compatibility issues are going to arise from your motherboard/storage controller selection rather than this CPU. (Unless you're planning to make heavy use of docker/VM, dual X5550 is overkill. One would be fine and would save on your power bills!)
  8. I think I might know what's going on here and why this works for some people and not others. Compare the packages from my fresh plugin install to the packages from StevenD's archive above: https://imgur.com/a/nK7uNi2
     I think the difference might be that StevenD was building on a previous version of unRAID / openVMTools_auto.plg and already had earlier versions of some of the packages at \\unraidName\flash\packages. Because his copy of the plugin already had the file and packages downloaded, he didn't have to update them to the latest versions. This was my first time using openVMTools_auto.plg, so I was forced to update the plugin file to grab later versions of automake, glibc, etc. from here: http://ftp.slackware.com/pub/slackware/slackware64-current/slackware64/l/
     Presumably there's an incompatibility with the later versions of one or more of these tools?
  9. I'm also unable to get the auto plugin to work, even after updating all the Slackware packages (unRAID 6.5.2). Earlier it was mentioned that stable-10.2.0.tar.gz should be 4 MB - mine is 3,126 KB. Is this what others (who have this working) are seeing? Is anyone else seeing this error?
  10. Hi, welcome! To install the plugin, select Plugins in the unRAID web interface, go to the Install Plugin tab, then enter this URL: https://raw.githubusercontent.com/StevenDTX/unRAID-openVMTools/master/openVMTools-test.plg
      A new version is being worked on, but the current release seems to work fine as far as VMWare-controlled shutdowns go. unRAID already includes the vmxnet3 driver, so hypervisor-initiated shutdown was really the only feature missing.
  11. My test/backup server was on 6.4.0_rc13 - the plugin is not compatible (kernel too old). I updated unRAID to 6.4.1_rc1 (released <24 hours ago) and got:
      Open-VM-Tools is not available for Kernel 4.14.15. Please update the plugin. Check here: http://lime-technology.com/forum/index.php?topic=38279.0 for more information.
      I tried to trick the plugin into installing anyway by downloading and editing the .plg file, but it's smarter than me.
      edit: tried editing the contents of the .tar.gz but still had issues. Reverted back to 6.4.0 - worked perfectly. Thank you for your efforts, and thanks again to Zeron!
  12. I'm using it to boot 6.4 currently - but I'm not using UEFI, that might be why? Thanks to you and Zeron for your efforts!
  13. Attached a plop ISO that will boot an unRAID thumb drive via USB passthrough. plop-bootusb.iso
  14. I use "plop" ISO mounted to CD in VMWare to boot a pass-thru USB device (UNRAID thumb drive). I could send the ISO if you're willing to give this a try? I could try taking on your build scripts but I'm not really sure what's involved.
  15. Yes, completely understand that. Me too! unRAID has been stable for me as a VMWare guest for about 7 years, so I'm not sure if this was some kind of hardware issue/anomaly or a compatibility issue with the latest beta running as a VMWare guest. The array is up and running again, thanks for looking into it. Will post if I see any more issues.
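
A minimal sketch of the separate Borg backup mentioned in post 1, following the general approach of the linked Reddit script. The repository location, archive name and retention numbers are placeholders made up for illustration; the linked script adds rclone upload, pruning schedules and notifications on top of this.

    # Assumed repo location - adjust to your own setup
    REPO=/mnt/user/backups/borg-plex
    borg init --encryption=repokey "$REPO"   # one-time repository setup

    # Back up the Plex folders that were excluded from CA Appdata Backup
    borg create --stats --compression lz4 \
        "$REPO::plex-media-{now:%Y-%m-%d}" \
        "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata" \
        "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media"

    # Optional: keep a week of dailies and a month of weeklies
    borg prune --keep-daily 7 --keep-weekly 4 "$REPO"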
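A rough sketch of the offline/online split suggested in post 2, written as a standalone user script rather than a change to CA Appdata Backup itself. The container name, destination paths and the "online" folders are assumptions; a real version would stop every container the way CA Appdata Backup does.

    #!/bin/bash
    # Assumed values for illustration
    DEST=/mnt/user/backups/appdata
    APPDATA=/mnt/user/appdata
    ONLINE1="plex/Library/Application Support/Plex Media Server/Metadata"
    ONLINE2="plex/Library/Application Support/Plex Media Server/Media"
    STAMP=$(date +%Y-%m-%d)

    # Offline backup: stop the container(s), archive appdata minus the big media folders
    docker stop plex
    tar -czf "$DEST/appdata-offline-$STAMP.tar.gz" \
        --exclude="$ONLINE1" --exclude="$ONLINE2" \
        -C "$APPDATA" .
    docker start plex

    # Online backup: containers are running again; these folders hold no databases
    tar -czf "$DEST/appdata-online-$STAMP.tar.gz" \
        -C "$APPDATA" "$ONLINE1" "$ONLINE2"

On the "append to the same .tar.gz" idea from the edit in post 2: tar can't append (-r) to a gzip-compressed archive, so a single combined file would mean either an uncompressed .tar or rewriting the archive, which is why this sketch writes a second file instead.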
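For the path check described in post 3, something like this from the unRAID shell should confirm things. The container name nzbget is an assumption - use whatever yours is called.

    # Confirm the script is where NZBGet expects it (path inside the container)
    docker exec nzbget ls "/data/scripts/VideoSort/VideoSort.py"

    # Confirm the container actually ships Python 2.7 for the plugin
    docker exec nzbget python2 --version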
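Regarding the GPU question in post 5: on an unRAID build where the Nvidia driver is exposed to Docker, a container normally needs the Nvidia runtime plus two environment variables before nvidia-smi shows up inside it. This is a sketch of what I'd expect to add to the Shinobi template, not a confirmed recipe - the container name and whether Shinobi then actually uses the GPU for encoding/decoding are assumptions.

    # Added to the container's "Extra Parameters" in the unRAID docker template:
    --runtime=nvidia

    # Added as container environment variables:
    NVIDIA_VISIBLE_DEVICES=all            # or a specific GPU UUID from nvidia-smi on the host
    NVIDIA_DRIVER_CAPABILITIES=all

    # Then, from the unRAID shell, this should list the GPU inside the container:
    docker exec -it shinobi nvidia-smi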