Ultra-Humanite


  1. Hi, I would like to run both OSs at the maximum possible performance via Unraid; I'll never need to run both at the same time. As far as I can tell, to get the best read/write performance I need to pass the Samsung NVMe controller through to the VMs. I have a 1TB M.2 NVMe drive that I would like to mount as an unassigned device and partition, using the first partition for the macOS VM and the second for the Win10 VM, passing the Samsung NVMe controller to whichever VM is running at the time. Is this possible? I'm relatively new to VMs, having only run them via images, which is relatively straightforward. If what I described above is not possible, I'm open to any other suggestions or solutions. Any help and guidance is greatly appreciated, thanks!
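[Editor's note] A sketch of the usual first step for controller passthrough, assuming a Samsung SM981/PM981-class drive; the exact lspci line, PCI address, and vendor:device ID below are examples, not taken from the post. The idea is to find the controller's vendor:device ID and IOMMU group, then bind the controller to vfio-pci so a VM can claim it.

```shell
# Hypothetical helper: pull the vendor:device ID out of an `lspci -nn` line
# so it can be handed to vfio-pci (e.g. vfio-pci.ids=... in syslinux.cfg).
pci_ids() {
    # lspci -nn prints the vendor:device pair in brackets, e.g. [144d:a808]
    echo "$1" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]'
}

# Example lspci output line (device string is an assumption for illustration):
line='01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]'
pci_ids "$line"    # prints 144d:a808

# On the live system you would instead run something like:
#   lspci -nn | grep -i 'non-volatile'                        # find the controller
#   readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group    # check isolation
```

Since only one VM runs at a time, the same passed-through controller can appear in both VM definitions; each OS then uses its own partition on the drive.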
  2. If I move anything to /mnt/disks/plex/gdrive_media_vfs, for example the tv folder from /mnt/disks/plex/tv, it gets uploaded to Google Drive no problem. I guess that's a solution.
  3. Thanks for the fast response. In my initial question I misspoke: by "local" I meant that anything I placed in "/mnt/user/local/gdrive_media_vfs" was uploaded without any issue. If I set LocalFilesShare="/mnt/disks/Plex" in the mount and upload scripts, I get an empty gdrive_media_vfs folder in the Plex folder and this error message when I run the upload script:

```
2020/06/20 14:22:11 INFO  : Starting bandwidth limiter at 20MBytes/s
2020/06/20 14:22:11 INFO  : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/06/20 14:22:12 DEBUG : mountcheck: Excluded
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/06/20 14:22:12 DEBUG : {}: Removing directory
2020/06/20 14:22:12 DEBUG : Local file system at /mnt/disks/Plex/gdrive_media_vfs: deleted 1 directories
2020/06/20 14:22:12 INFO  : There was nothing to transfer
```

Here is my config:

```
[gdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
server_side_across_configs = true
root_folder_id =

[gdrive_media_vfs]
type = crypt
remote = gdrive:gdrive_media_vfs
filename_encryption = standard
directory_name_encryption = true
password =
password2 =
```

Here is the mount script:

```bash
#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.7 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/disks/Plex" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######
```

Here is the upload script:

```bash
#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/disks/Plex" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="20M"
BWLimit3Time="16:00"
BWLimit3="20M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file. Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts. Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######
```
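[Editor's note] The log above gives the likely explanation: the upload job acts on the remote-named subfolder of LocalFilesShare, not on LocalFilesShare itself, so it saw /mnt/disks/Plex/gdrive_media_vfs as empty and reported "There was nothing to transfer". A minimal sketch of the path composition; the commented rclone call is an approximation of what the settings map to, not the wrapper's exact internals.

```shell
# The upload job reads from "$LocalFilesShare/$RcloneRemoteName", so files
# must be staged inside that subfolder before they will transfer.
LocalFilesShare="/mnt/disks/Plex"
RcloneRemoteName="gdrive_media_vfs"
upload_source="$LocalFilesShare/$RcloneRemoteName"
echo "$upload_source"   # /mnt/disks/Plex/gdrive_media_vfs

# Roughly the rclone call the settings above translate to:
#   rclone move "$upload_source" "$RcloneRemoteName:" \
#       --min-age 15m --exclude 'downloads/**' \
#       --bwlimit "01:00,off 08:00,20M 16:00,20M" \
#       --drive-stop-on-upload-limit
```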
  4. I need a little help here... maybe a lot of help. I have everything set up to the point where, when I put a file and/or directory into the "local" share and run the "rclone_upload" script, everything is uploaded to Google Drive and deleted from the local share, as expected. My Plex library is located on an unassigned-devices disk at /mnt/disks/plex. I would like to transfer that whole library to Google Drive, but I can't figure out how to mount /mnt/disks/plex via the rclone_mount script so rclone has access to it and uploads it when I run the "rclone_upload" script.
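[Editor's note] For later readers: with the settings shown in this thread, the mount script also builds a mergerfs union of the local share and the cloud mount, so a single path shows both not-yet-uploaded and already-uploaded files. A sketch of where that merged view lands, using the variable names from the scripts above; pointing Plex at it is a suggestion, not something stated in the post.

```shell
# With LocalFilesShare pointed at the unassigned-devices disk, mergerfs
# overlays the local files on top of the rclone mount at this path:
RcloneRemoteName="gdrive_media_vfs"
MergerfsMountShare="/mnt/user/mount_mergerfs"
merged="$MergerfsMountShare/$RcloneRemoteName"
echo "$merged"   # /mnt/user/mount_mergerfs/gdrive_media_vfs
```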
  5. Is anyone using node-red-contrib-nbrowser? If I try to use it, my Docker image crashes. Never mind, I went in a different direction.
  6. I wouldn't say it's awful; that's probably just me being used to Safari not smoothing fonts as effectively on an external display. On the internal Retina display, font smoothing/anti-aliasing in Safari appears much better, although the font still looks bolder in Firefox.
  7. Everything looks fine on a Mac in Firefox 61.0.2 and Safari 11.1.2; the font weight looks different, but it's clear and legible in both browsers. External display at 3440x1440.