Cliff

Members
  • Content Count

    33
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Cliff

  • Rank
    Advanced Member

  1. Thanks! What is the reason for avoiding a QLC NVMe drive? And do you have any tips for other 1TB NVMe drives?
  2. I have some questions. How much does memory speed matter if gaming is not a big priority? I was thinking about getting 2x16GB 3200MHz RAM; is that a good choice? Also, if I get a 1TB NVMe M.2 SSD, can I share the space with other VMs or Docker containers? Do I have to specify a disk size for the Win10 VM, or can everything share the same disk? Can't I use GPU transcoding if running a Plex Docker container?
  3. I am planning on building a 3900X system as cheaply as possible. I have an old gaming PC with an i7 2600K CPU and a GTX 1060 GPU that I want to replace with a virtual Windows 10 machine. I will be using the Win10 VM for general work and casual gaming. I would also like to run some more VMs for testing, plus 10-15 containers like Plex, Pi-hole, ruTorrent, VS Code... Is there any reason not to go with a B450 motherboard? I will not do any high-end gaming or similar. Do you have any other suggestions regarding my choice of components?
     Components
     CPU: AMD Ryzen 3900X with stock cooler
     Motherboard: Some cheap B450 motherboard like the Asus ROG Strix B450-F Gaming or similar
     RAM: Some cheap 32GB (2x16GB) 3200MHz DDR4 (planning to upgrade to 64GB in the future)
     M.2 SSD: Intel 660p Series M.2 2280 SSD 1TB
     PSU: Fractal Design Edison M 750W (Gold)
     Storage HDD: 1 x 8TB (Seagate Exos 7E8 ST8000NM0055 256MB 8TB), planning to buy a few more in the future
     Enclosure: Unknown
     GPU: GTX 1060 (already owned)
  4. OK, so I can write files directly to that folder and they get transferred to Google Drive? Sorry for being slow, but in that case why is the rclone_upload folder needed? And hopefully one last question: when installing the Docker containers for Sonarr/Radarr, I should provide folder mappings for "tv" and "downloads". From the first page I understand that the tv folder should be mapped, in my case, to "/mnt/user/mount_unionfs/google_vfs/Media/Tv". But do I need any "downloads" mapping?
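A sketch of what those mappings could look like as a plain docker run command; the image name and the internal /tv and /downloads paths are assumptions based on common linuxserver.io conventions, not taken from the guide:

```shell
# hypothetical sonarr container mappings matching the paths discussed above;
# internal /tv and /downloads paths follow common linuxserver.io defaults
if command -v docker >/dev/null 2>&1; then
    docker run -d --name sonarr \
        -v /mnt/user/mount_unionfs/google_vfs/Media/Tv:/tv \
        -v /mnt/user/mount_unionfs/google_vfs/downloads:/downloads \
        linuxserver/sonarr
fi
```

On Unraid the same mappings would normally be entered in the Docker template UI rather than on the command line.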
  5. I tried following the SpaceInvader One seedbox guide on YouTube and using it with this script, but I have some problems that I hope someone has a solution to. After a torrent is finished and unpacked on the seedbox, I use Syncthing to transfer the file back to the Unraid server. I mapped the folder "rclone_upload/google_vfs" in the Syncthing container and added the path "/Media/Movies" inside the Syncthing container, as it matches my Google Drive paths. The first time, everything works great and all files are uploaded to the correct Google Drive folder. But then the script removes the folders and Syncthing breaks down, as the folder markers are deleted. If I remove "--delete-empty-src-dirs" from the rclone_upload script, nothing gets deleted and the folders fill up. Does anyone have a solution to this problem?
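One workaround that is sometimes suggested for this kind of setup is to recreate Syncthing's folder marker (a `.stfolder` directory) after the upload script has pruned the empty directories, so the Syncthing folder definition stays valid. A minimal sketch, using a scratch directory; on a real server the root would be the rclone_upload share instead:

```shell
# demo in a scratch directory; on unraid, ROOT would be something like
# /mnt/user/rclone_upload/google_vfs (path is an assumption)
ROOT="$(mktemp -d)"

# after "rclone move ... --delete-empty-src-dirs" removes the emptied
# directories, recreating the .stfolder marker keeps syncthing working
mkdir -p "$ROOT/Media/Movies/.stfolder"

ls -A "$ROOT/Media/Movies"
```

This could be appended to the end of the upload script so the marker is restored on every run.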
  6. I need to install a Python plugin in Domoticz, but I need to install both "python3-httplib2" and "git" to be able to run it. Is this possible somehow using the Docker container? I tried opening the console, but could not use apt-get inside the container to install the required packages.
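One common approach when apt-get is unavailable or changes would be lost on container updates is to build a small derived image that adds the packages. A sketch, assuming the base image is Debian-based and named domoticz/domoticz (both assumptions):

```shell
# sketch: write a Dockerfile extending the domoticz image so the packages
# persist across container recreations; base image name is an assumption
cat > /tmp/Dockerfile.domoticz <<'EOF'
FROM domoticz/domoticz:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-httplib2 git && \
    rm -rf /var/lib/apt/lists/*
EOF

# build it with: docker build -f /tmp/Dockerfile.domoticz -t domoticz-custom .
cat /tmp/Dockerfile.domoticz
```

Installing via `docker exec` into the running container also works for a quick test, but the packages disappear whenever the container is recreated.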
  7. OK, thanks for all the help. I am trying it out now. Just one more question that is slightly off topic. Before, when I was experimenting with a cache, I ran into some problems where I ran out of RAM or HDD space when using Plex to scan and add all my media through the remote. After that, Unraid stopped responding and I had to reboot to get it working again. Are there any settings I need to change in the Plex Docker container when scanning my remotes to prevent it from filling up? I have only 12GB of RAM in my current server; do I need to lower any settings in the mount script, or should it work anyway?
  8. OK, thanks for the answer. But if I mount gdrive directly, I guess I will not be able to take advantage of the crypt/cache features that improve Plex streaming, as it says on the first page? "I use a rclone vfs mount as opposed to a rclone cache mount as this is optimised for streaming, has faster media start times, and limits API calls to google to avoid bans."
  9. I just noticed, when running the upload script and looking at the log file, that it seems to loop through my entire media library on Google Drive with "skipping undecryptable filename". Does that have something to do with it? I already have a couple of TB of unencrypted media on my Google Drive that I want to show up in my mount folder.
  10. The mount script is exactly as on GitHub; the only thing I modified was commenting out the starting of the Docker containers.
  11. #!/bin/bash
      ####### Check if script is already running ##########
      if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
          exit
      else
          touch /mnt/user/appdata/other/rclone/rclone_mount_running
      fi
      ####### End Check if script already running ##########

      ####### Start rclone gdrive mount ##########
      # check if gdrive mount already created
      if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
      else
          echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."
          # create directories for rclone mount and unionfs mount
          mkdir -p /mnt/user/appdata/other/rclone
          mkdir -p /mnt/user/mount_rclone/google_vfs
          mkdir -p /mnt/user/mount_unionfs/google_vfs
          mkdir -p /mnt/user/rclone_upload/google_vfs
          rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
          # check if mount successful
          # slight pause to give mount time to finalise
          sleep 5
          if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
              echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
          else
              echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
              rm /mnt/user/appdata/other/rclone/rclone_mount_running
              exit
          fi
      fi
      ####### End rclone gdrive mount ##########

      ####### Start unionfs mount ##########
      if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
      else
          unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
          if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
              echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
          else
              echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
              rm /mnt/user/appdata/other/rclone/rclone_mount_running
              exit
          fi
      fi
      ####### End Mount unionfs ##########

      ############### starting dockers that need unionfs mount ######################
      # only start dockers once
      if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
      else
          touch /mnt/user/appdata/other/rclone/dockers_started
          echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
          # docker start plex
          # docker start ombi
          # docker start tautulli
          # docker start radarr
          # docker start sonarr
      fi
      ############### end dockers that need unionfs mount ######################

      exit
  12. I am using all the scripts from GitHub unmodified. If I try the same thing using a cache instead and remove the vfs options from the mount script, it seems to work, but when using the crypt nothing gets mounted in any of the created folders. I tried placing a small .nfo file in the upload folder, and after running the upload script I have a small encrypted file in my media folder, and also a mountcheck file.
  13. I have tried about five times now, following the guide exactly. I even moved to a new server with new hardware, with the same result. The closest I have got is that I now get a "crypt file" with random letters in my media folder on Google Drive, which I believe is the mountcheck file. If I add another random file to /user/mount_rclone/google_vfs/, I get another encrypted file in my media folder. But nothing from Google Drive shows up on my server in any of the folders. And if I try a standard rclone mount command, I can mount my Google Drive to a folder without problems.
  14. I tried changing to a cache instead and mounted manually. Now I get a folder and can see all my Google Drive media. But when I add that folder to Radarr and try to do a bulk import of movies, the log gets flooded with: "Unraid emhttpd: error: get_filesystem_status, 6512: Operation not supported (95): getxattr: /mnt/user/media" Does anyone know why?
  15. OK, but all the folders are empty and nothing gets uploaded to or downloaded from the crypt folder on my Google Drive. And where am I supposed to see all my media files from Google Drive so I can add them to Plex?