00b5

Posts posted by 00b5

  1. On 5/15/2023 at 12:45 PM, Kaizac said:

    The "context canceled" is usually an error reporting a timeout. So maybe the files can't be accessed yet, or can't be deleted?

     

    I don't understand why you're using a simple copy instead of rclone to move the files? With rclone you can be certain that files arrive at their destination, and it has better error handling in case of problems.

     

    Did you mean rsync instead of copy? I've been using this "copy script" for multiple things for years.

     

    The workflow is like this:

    • *darr apps run on my home server, and request files
    • requests are put into a folder, which syncs with a seedbox
    • Seedbox downloads files to a specific folder, which then syncs back to home server
    • *darr apps process/move/etc files and everything is good
    • the copy script runs on a 3rd server that runs plex and rclone, hosting a 2nd plex server for sharing (I don't share my home plex server). The copy script just grabs files (every xx mins) and copies them to the mergerFS folder so they're also available to the plex cloud instance.

    I don't run the *darr apps on the seedbox; it really only seeds and moves files around with ResilioSync. I used to rent a server to host plex in the cloud tied to gdrive (for when I'm remote, and for sharing), since my home upload bandwidth is subpar. Now I've been able to co-locate a server on a nice fiber connection, so I'm trying to move toward using it. The main difference is moving from a rented online linux server to an owned server running unraid, with this rclone plugin keeping plex on the gdrive source files (at least until that gets killed off).

     

    I was letting the copy script run every 2 mins to make sure it would grab any files in that sync folder before the other end cleaned up and processed them. I'll try slowing it down to every 10 mins or so and see if I can avoid these weird errors.

  2. 7 minutes ago, Kaizac said:

    Hard to troubleshoot without the scripts you're running.

    You mean the main mount script, or the one that copies files into the merger folder? 

     

    Main Mount Script

     

    #!/bin/bash
    
    ######################
    #### Mount Script ####
    ######################
    ## Version 0.96.9.3 ##
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Change the name of the rclone remote and shares to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: include custom command and bind mount settings
    # 4. Optional: include extra folders in mergerfs mount
    
    # REQUIRED SETTINGS
    RcloneRemoteName="google" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    #LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    LocalFilesShare="ignore" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="600G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    # DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified
    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
    LocalFilesShare3="ignore"
    LocalFilesShare4="ignore"
    
    # Add extra commands or filters
    Command1="--rc"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    
    CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
    RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
    NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
    VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them
    
    ####### END SETTINGS #######
    
    ###############################################################################
    #####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
    ###############################################################################
    
    ####### Preparing mount location variables #######
    RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
    MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location
    
    ####### create directories for rclone mount and mergerfs mounts #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
    mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
    if [[  $LocalFilesShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    	LocalFilesLocation="/tmp/$RcloneRemoteName"
    	eval mkdir -p $LocalFilesLocation
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    	eval mkdir -p $LocalFilesLocation/"$MountFolders"
    fi
    mkdir -p $RcloneMountLocation
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    	mkdir -p $MergerFSMountLocation
    fi
    
    
    #######  Check if script is already running  #######
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    fi
    
    ####### Checking have connectivity #######
    
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
    ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
    else
    	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    	exit
    fi
    
    #######  Create Rclone Mount  #######
    
    # Check If Rclone Mount Already Created
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    	touch mountcheck
    	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    	if [[  $CreateBindMount == 'Y' ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		else
    			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
    			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    		fi
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    	else
    		RCloneMountIP=""
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    	fi
    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time $RcloneMountDirCacheTime \
    	--attr-timeout $RcloneMountDirCacheTime \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size $RcloneCacheMaxSize \
    	--vfs-cache-max-age $RcloneCacheMaxAge \
    	--vfs-read-ahead 1G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 15 seconds"
    # slight pause to give mount time to finalise
    	sleep 15
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
    		docker stop $DockerStart
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    		exit
    	fi
    fi
    
    ####### Start MergerFS Mount #######
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
    else
    	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    	else
    # check if mergerfs already installed
    		if [[ -f "/bin/mergerfs" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
    		else
    # Build mergerfs binary
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
    			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
    			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
    			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
    # check if mergerfs install successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
    			sleep 5
    			if [[ -f "/bin/mergerfs" ]]; then
    				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
    			else
    				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
    				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    				exit
    			fi
    		fi
    # Create mergerfs mount
    		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
    # Extra Mergerfs folders
    		if [[  $LocalFilesShare2 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare2=":$LocalFilesShare2"
    		else
    			LocalFilesShare2=""
    		fi
    		if [[  $LocalFilesShare3 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare3=":$LocalFilesShare3"
    		else
    			LocalFilesShare3=""
    		fi
    		if [[  $LocalFilesShare4 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare4=":$LocalFilesShare4"
    		else
    			LocalFilesShare4=""
    		fi
    # make sure mergerfs mount point is empty
    		mv $MergerFSMountLocation $LocalFilesLocation
    		mkdir -p $MergerFSMountLocation
    # mergerfs mount command
    		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
    # check if mergerfs mount successful
    		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
    		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
    		else
    			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
    			docker stop $DockerStart
    			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    			exit
    		fi
    	fi
    fi
    
    ####### Starting Dockers That Need Mergerfs Mount To Work Properly #######
    
    # only start dockers once
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
    else
    # Check CA Appdata plugin not backing up or restoring
    	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    	else
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    		docker start $DockerStart
    	fi
    fi
    
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
    
    exit

     

     

    Copy script (copies files from a folder that syncs with another server via Resilio Sync); it runs every 5 mins or so:

     

    #!/bin/bash
    # btsync capture SCRIPT
    # exec 3>&1 4>&2
    # trap 'exec 2>&4 1>&3' 0 1 2 3
    #  Everything below will go to the file 'rsync-date.log':
    
    date_format="+%d.%m.%Y-%T" # log timestamp; no spaces, since it is expanded unquoted below
    
    LOCKFILE=/tmp/lock.txt
    if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
        echo "[ $(date ${date_format}) ] Copy already running @ ${LOCKFILE}"
        exit
    fi
        exit
    fi
    
    # make sure the lockfile is removed when we exit and then claim it
    trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
    echo $$ > ${LOCKFILE}
    
    if [[ -f "/mnt/user/mount_rclone/google/mountcheck" ]]; then
    	echo "[ $(date ${date_format}) ] INFO: rclone remote is mounted, starting copy"
    
    echo "[ $(date ${date_format}) ] #################################### ################"
    echo "[ $(date ${date_format}) ] ################# Copy TV Shows ################"
    echo "[ $(date ${date_format}) ] rsync-ing TV shows from resiloSync:"
    cp -rv /mnt/user/data/TV/* /mnt/user/mount_mergerfs/google/Media/TV/
    
    echo "[ $(date ${date_format}) ] ################# Copy Movies ################"
    echo "[ $(date ${date_format}) ] rsync-ing Movies from resiloSync:"
    cp -rv /mnt/user/data/Movies/* /mnt/user/mount_mergerfs/google/Media/Movies/
    
    echo "[ $(date ${date_format}) ] ###################################################"
    
    else
    	echo "[ $(date ${date_format}) ] INFO: Mount not running. Will now abort copy"
    fi
    
    sleep 30
    rm -f ${LOCKFILE}
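
     

    For reference, a minimal sketch of what Kaizac's rclone suggestion might look like in place of the cp calls (paths reused from the script above; --min-age is my own guess to skip files ResilioSync may still be writing, not something from the guide):

    # Hedged sketch, not what I currently run: rclone move verifies files
    # arrive and retries on errors, unlike a blind cp. --min-age skips
    # files ResilioSync might still be writing.
    rclone move /mnt/user/data/TV/ /mnt/user/mount_mergerfs/google/Media/TV/ \
        --min-age 15m --transfers 2 -v
    rclone move /mnt/user/data/Movies/ /mnt/user/mount_mergerfs/google/Media/Movies/ \
        --min-age 15m --transfers 2 -v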

     

     

    Here is about 40 mins of the log where it tries to copy this file up:

     

    2023/05/15 10:17:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:18:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:19:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    Script Starting May 15, 2023 10:20.01
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    15.05.2023 10:20:01 INFO: Not creating local folders as requested.
    15.05.2023 10:20:01 INFO: Creating MergerFS folders.
    15.05.2023 10:20:01 INFO: *** Starting mount of remote google
    15.05.2023 10:20:01 INFO: Checking if this script is already running.
    15.05.2023 10:20:01 INFO: Script not running - proceeding.
    15.05.2023 10:20:01 INFO: *** Checking if online
    15.05.2023 10:20:02 PASSED: *** Internet online
    15.05.2023 10:20:02 INFO: Success google remote is already mounted.
    15.05.2023 10:20:02 INFO: Check successful, google mergerfs mount in place.
    15.05.2023 10:20:02 INFO: dockers already started.
    15.05.2023 10:20:02 INFO: Script complete
    Script Finished May 15, 2023 10:20.02
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    2023/05/15 10:20:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:21:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds6tAjl-sjvsxWVtbO2hr6eHhfX58FibGCOIPFijx8n5_LhEaKRKVeLAmdM7rdxiIM6AnlhInp9n8Bl1IGxgz4oBg": context canceled
    2023/05/15 10:21:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:21:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.419Gi (was 596.419Gi)
    2023/05/15 10:22:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.515Gi (was 599.515Gi)
    2023/05/15 10:22:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:23:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:24:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:25:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtO-3iUahxqOPfrLtJvUGzKbVbC_jIet8MR1hSM4t-JDEvJGPEXYgjVyO3alao3Jira9AI0ZWLeDbVKmtRXvy9FdQ": context canceled
    2023/05/15 10:25:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:25:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.522Gi (was 596.521Gi)
    2023/05/15 10:26:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.625Gi (was 599.625Gi)
    2023/05/15 10:26:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:27:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:28:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:29:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    Script Starting May 15, 2023 10:30.01
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    15.05.2023 10:30:01 INFO: Not creating local folders as requested.
    15.05.2023 10:30:01 INFO: Creating MergerFS folders.
    15.05.2023 10:30:01 INFO: *** Starting mount of remote google
    15.05.2023 10:30:01 INFO: Checking if this script is already running.
    15.05.2023 10:30:01 INFO: Script not running - proceeding.
    15.05.2023 10:30:01 INFO: *** Checking if online
    15.05.2023 10:30:02 PASSED: *** Internet online
    15.05.2023 10:30:02 INFO: Success google remote is already mounted.
    15.05.2023 10:30:02 INFO: Check successful, google mergerfs mount in place.
    15.05.2023 10:30:02 INFO: dockers already started.
    15.05.2023 10:30:02 INFO: Script complete
    Script Finished May 15, 2023 10:30.02
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    2023/05/15 10:30:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:31:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdvY7OCd_9eS-M8zevNS2RwdMIUrvChpIfFvJbZwXkA3WTLZOSbQnxi03cunE_-VdMLlRHt4ElXs-7BokEs1s_V1yQ": context canceled
    2023/05/15 10:31:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:31:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.452Gi (was 596.452Gi)
    2023/05/15 10:32:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.526Gi (was 599.526Gi)
    2023/05/15 10:32:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:33:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:34:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:35:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsZfHAOT0kaMBaqBO-V9hllnnC1cj2FoFWxpu4k1ugT4MmBnWt5d-4ozDwEbcjp9STh-TGSnC9nmFamo3hhW1ueNw": context canceled
    2023/05/15 10:35:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:35:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.583Gi (was 596.583Gi)
    2023/05/15 10:36:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.645Gi (was 599.645Gi)
    2023/05/15 10:36:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:37:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:38:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:39:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    Script Starting May 15, 2023 10:40.01
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    15.05.2023 10:40:01 INFO: Not creating local folders as requested.
    15.05.2023 10:40:01 INFO: Creating MergerFS folders.
    15.05.2023 10:40:01 INFO: *** Starting mount of remote google
    15.05.2023 10:40:01 INFO: Checking if this script is already running.
    15.05.2023 10:40:01 INFO: Script not running - proceeding.
    15.05.2023 10:40:01 INFO: *** Checking if online
    15.05.2023 10:40:02 PASSED: *** Internet online
    15.05.2023 10:40:02 INFO: Success google remote is already mounted.
    15.05.2023 10:40:02 INFO: Check successful, google mergerfs mount in place.
    15.05.2023 10:40:02 INFO: dockers already started.
    15.05.2023 10:40:02 INFO: Script complete
    Script Finished May 15, 2023 10:40.02
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    2023/05/15 10:40:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:41:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdt5s0GvZo7nDCoSPIB_BwQwa-PU1FSe0i8UWPJDOQ_cwFYx6WL33iTkh85OnXiegp5yn9OoRJLn8xAbe94O0fXcZQ": context canceled
    2023/05/15 10:41:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:41:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.444Gi (was 596.444Gi)
    2023/05/15 10:42:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.494Gi (was 599.494Gi)
    2023/05/15 10:42:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:43:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:44:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:45:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds-MUxpNB4t2OVXgjxdH8u9gUF4gTbJb8x_MmVSimgBiAxIl-txOpkWeOKxkJ2NvpBqHTvvYDLC1KwidTegrCt7lA": context canceled
    2023/05/15 10:45:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:45:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.457Gi (was 596.457Gi)
    2023/05/15 10:46:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.576Gi (was 599.576Gi)
    2023/05/15 10:46:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:47:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:48:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:49:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    Script Starting May 15, 2023 10:50.01
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    15.05.2023 10:50:01 INFO: Not creating local folders as requested.
    15.05.2023 10:50:01 INFO: Creating MergerFS folders.
    15.05.2023 10:50:01 INFO: *** Starting mount of remote google
    15.05.2023 10:50:01 INFO: Checking if this script is already running.
    15.05.2023 10:50:01 INFO: Script not running - proceeding.
    15.05.2023 10:50:01 INFO: *** Checking if online
    15.05.2023 10:50:02 PASSED: *** Internet online
    15.05.2023 10:50:02 INFO: Success google remote is already mounted.
    15.05.2023 10:50:02 INFO: Check successful, google mergerfs mount in place.
    15.05.2023 10:50:02 INFO: dockers already started.
    15.05.2023 10:50:02 INFO: Script complete
    Script Finished May 15, 2023 10:50.02
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt
    
    2023/05/15 10:50:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:51:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsy0omBKNyQQLp0swJqar7qlA531fiz4eHWL-ZtvsmkRTulOE9QsZkw_8RNZ4kHM8ZFoO220c3HDF06SM3K4nMcyg": context canceled
    2023/05/15 10:51:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:51:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.540Gi (was 596.540Gi)
    2023/05/15 10:52:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.569Gi (was 599.569Gi)
    2023/05/15 10:52:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:53:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:54:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:55:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsBnUc_AykzI7geN4fr0mzK34xZZkcuOCCDyX2SUFOl4GqYX80eS2xYcpVlqXqyqu3gnyYFJxYLNQbuW5_v1Bly6gJN1CG7": context canceled
    2023/05/15 10:55:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
    2023/05/15 10:55:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.478Gi (was 596.478Gi)
    2023/05/15 10:56:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.554Gi (was 599.554Gi)
    2023/05/15 10:56:43 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
    2023/05/15 10:57:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:58:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    2023/05/15 10:59:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
    Script Starting May 15, 2023 11:00.01

     

  3. I give up; does anyone know what the hell this error is actually about?

     

    2023/05/15 10:11:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdt-Lz90u9Nes1YU9fbno82aTyk9La51mu1QEnq3UbWL3Shb2lLaFGQvwDdR76XjFluBGLd02Gls5nR90LwR_qVyvg": context canceled

     

    A script copies files into the merger_fs folder/share. Most stuff works fine, but every now and again the above error happens.

     

    '/mnt/user/data/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv' -> '/mnt/user/mount_mergerfs/google/Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv'

  4. I seem to have messed up this install/etc. 

     

    My main issue is that files seem to be left in .../mount_rclone/google/..., and the script won't mount again, since the path isn't empty.

     

    I've noticed it twice: once it was files in the "movies" folder, which I just deleted before letting it mount; the second time it was the "TV" folder, and I updated the main script to just mount even if the path isn't empty. The issue occurred because the server rebooted/hard powered off (power issues that I can't really remedy short of getting a UPS for this backup unraid server).

     

    The only thing I think I do out of the ordinary is copy files directly into the /mount/mergerfs/google/TV or Movies folder with an automated script.

     

    I'm only running the main script: unraid_rclone_mount

     

    I don't run any "upload script". I'm pretty sure the "broken" files (left in the mount_rclone folder) never upload to gdrive; they are available to "plex" for example, and listed in the mergerfs folder, but not actually in the cloud.
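
    A rough way to double-check this (sketch only; it assumes the "google" remote name from my mount script, and rclone check is a read-only compare):

    # With the rclone mount stopped, anything still under the mountpoint is a
    # local leftover that never reached gdrive:
    ls -lR /mnt/user/mount_rclone/google/
    # One-way compare of a folder against the remote -- reports files that
    # exist locally but are missing in the cloud:
    rclone check /mnt/user/mount_rclone/google/Media/TV google:Media/TV --one-way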

     

    Any ideas or directions? 

  5. I hope this is simple, but:

     

    I only need to start one docker, plex. The script starts it fine, but it creates a check file (dockers_started) in my google drive root so it doesn't try to start it again and again.

     

    When does this file ever get removed? If I restart the system, for example, it never gets deleted (unless I make an unmount script and include removing it as part of that?).

     

    The real issue I want to work around is that the server is remote to me, and unfortunately it seems to lose power at some point monthly (great internet, shitty power). I'm just letting the system reboot at this point, so the mount happily comes back up, but the dockers (plex) never start.

     

    Am I just an edge case on this? I can't just delete the file on array start, since I need gdrive mounted first, and if I delete it via the main script, it will just start plex over and over again.

     

    Got a more elegant way to fix this, something where the script knows it is the first run since boot, and can delete the file and allow the dockers to start?
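
    Something like this is the furthest I've gotten (sketch only; it relies on /tmp being wiped on every boot, the same trick the lock files already use, and hard-codes my "google" remote name):

    # If the marker is missing, this is the first run since boot, so clear the
    # stale dockers_started flag left behind by the power loss.
    if [[ ! -f /tmp/rclone_boot_marker ]]; then
        rm -f /mnt/user/appdata/other/rclone/remotes/google/dockers_started
        touch /tmp/rclone_boot_marker
    fi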

  6. Solved: Totally not the right password; it looks like the controller was using some other login, I dunno. I found the correct one, and it is NOT the cloud/gui one I updated, so that's that...

     

    Original Post: 

    Anyone point me in the correct direction?

     

    I can't log into my instance any longer. I only have this app/docker and one AP. 

     

    I changed my ui.com password recently (since their breach was a lot worse than they let on). Honestly, I always had issues logging in and had to use the password from the cloud, but now neither the old nor the new one works on the local docker instance, so no idea what I did.

     

    Any ideas on how I can change/reset it? Or should I just make an entirely new docker instance/login and let it re-find my AP?

  7. I have my win10 VM set up on a 50GB .img file. I added an SSD a while back for faster storage for gaming/etc.

     

    I'd really like to JUST use the passed-through drive as the boot drive and whatever storage is needed/etc.

     

    Is there a way to take the .img and put it on the physically passed-through SSD? I'll naturally overwrite the SSD with just the boot image/etc and go from there.
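
    From what I've read, something along these lines should work (sketch only; the vdisk path and /dev/sdX are placeholders for my actual paths, and I'm assuming qemu-img is available since unraid ships the KVM stack):

    # Copy the vdisk onto the raw device. qemu-img handles raw or qcow2
    # source images; double-check /dev/sdX first -- this overwrites the SSD.
    qemu-img convert -p -O raw /mnt/user/domains/Windows10/vdisk1.img /dev/sdX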

     

    this is how my drives are set up now:

    https://imgur.com/a/pTWivFm

     


     

    Thanks!

  8. Just a quick question, I currently have two unraid keys, and was looking at getting a third.

     

    Are there any deals/etc to be had to acquire a third key?

     

    I know there have been in the past; just thought I'd ask.

     

    thanks!

  9. I recently changed my cache drive. I moved all the shares on the cache to "no cache", then manually copied any remaining files off.

     

    Then I assigned the new cache drive, and set all the shares to prefer, and it copied all the data back.

     

    Dockers and VMs work fine, if the cache drive shows up. This only started happening recently, after setting up the new cache drive.

     

    I can stop the array (the array auto-starts after reboot), assign the cache drive, then start the array, and everything is OK.

     

    Diags attached, though I'm not sure which file I should be looking at for this type of issue.

    galactica-diagnostics-20170426-2109.zip

  10. jonathanm;

     

    Thanks, simple copy-paste!

     

    aptalca;

     

    I missed that it was converted to linuxserver.io, so I'll look into doing that one. I got it working by adding a bunch of (two, really) server{} setups, but I'll look at swapping to the other one next time, since I'm about tapped out today. Thanks!

     

  11. Hello aptalca (and others),

     

    Can someone point me in the right direction, I have two questions/issues. One needs fixing, one is something I want to change.

    I'm sure these have been asked and answered, but my quick looking didn't turn anything up.

     

    1. I've had this docker running for a while, with an htpasswd on my primary domain (ex, unraid.domain.space).

     

    Going there generates a password prompt. Good. However, all the proxies (ex, unraid.domain.space/sonarr) do NOT prompt for a password, and are basically wide open. What did I do wrong to miss getting the password on those? If I should add a config/etc, just let me know. I'm hoping it's just something stupid I messed up. None of my docker apps themselves are using whatever passwords might be built in, so I thought I was relying on the htpasswd prompt.

     

    2. I'd also like to move from unraid.domain.space/sonarr to sonarr.domain.space. Any links/info on how I'd set up those kinds of subdomains instead? (I'd like to do subdomains for all of these instead of .space/appname.)

     

    Thanks in advance!

  12. My plexpy is borked:

     

    400 Bad Request
    
    Illegal cookie name manage_view
    
    Traceback (most recent call last):
      File "/opt/plexpy/lib/cherrypy/_cprequest.py", line 635, in respond
        self.process_headers()
      File "/opt/plexpy/lib/cherrypy/_cprequest.py", line 737, in process_headers
        raise cherrypy.HTTPError(400, msg)
    HTTPError: (400, 'Illegal cookie name manage_view')
    Powered by CherryPy 5.1.0

     

    This is after moving the config to a new dir (plexpy2), and flat-out removing it and re-adding it.

     

    The log isn't much help either:

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing... 
    
    -------------------------------------
    _ _ _
    | |___| (_) ___
    | / __| | |/ _ \ 
    | \__ \ | | (_) |
    |_|___/ |_|\___/
    |_|
    
    Brought to you by linuxserver.io
    We do accept donations at:
    https://www.linuxserver.io/donations
    -------------------------------------
    GID/UID
    -------------------------------------
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 30-install: executing... 
    [cont-init.d] 30-install: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

  13. This is the first build - http://pcpartpicker.com/user/Huy/saved/xGd8TW - featuring an i7 6700 on a GA-Z170M-D3H. Is that motherboard even compatible with unRAID? According to the wiki, some Gigabyte ones aren't. This - http://pcpartpicker.com/user/Huy/saved/rDjj4D - is the second build, with an E3-1245v5 on an MBD-X11SSM. Also, it features ECC RAM, which I read is better for critical applications (but not necessary for a media server).

     

    The Gigabyte board issue is (I think) related to the way they can back up the motherboard's BIOS to the HDD. If it does this, you'll end up with a few 4TB HDDs where one of them is slightly smaller, since it's missing the space taken by the backup copy of the BIOS. Simply turn that feature off, and no worries. I had a Gigabyte board in my unraid for years, until I recently upgraded to an ASRock with a Haswell Pentium CPU.
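
    If you want to check whether a disk has been clipped this way, a quick sketch (the Gigabyte feature works by setting a Host Protected Area; /dev/sdX is a placeholder for the actual device):

    # Reports "max sectors = X/Y" -- if X is less than Y, an HPA is hiding
    # part of the disk.
    hdparm -N /dev/sdX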

  14. I used the preclear (via GUI) to do 2 passes on 4 different USB3-based WD HDDs. They were all MyBooks; I removed them from their enclosures when done and added them to unraid.

     

    Preclear speeds were ~125MB/sec, and it took (as far as I can tell) about the same time as having them internal would have.

     

    For full disclosure though, I ended up replacing a 2TB with a 4TB (and so on), so I don't actually know if the preclear would have worked correctly, since some of these external USB3 enclosures have built-in encryption.

     

    But if you have USB3 (I think USB2 might drag on too long), you could run a cable and try that out?

  15. Does anyone do this?

     

    I don't want to have to copy data to my laptop and then to my external drive, etc. I have Gbit, but I often use my laptop on wifi, and I don't want to run a cable to it.

     

    I want to copy some media to an external HDD so I can take it on the road with me. I COULD just let a huge copy run overnight, but surely with USB3 on the server and a drive, I can just use that instead?

     

    I have the Dolphin docker installed, but when I used it (I think with Unassigned Devices) it wouldn't let me write files to the drive. The external is probably formatted NTFS. I could reformat it, though I don't recall if there's anything on there that I want.

     

    TL;DR

     

    I want to hook up a USB HDD to my unraid machine and copy files to it, preferably from a GUI/web, but I guess even CLI would be OK for now. Is anyone doing this, and if so, how?
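
    For the CLI route, this is roughly what I have in mind (sketch only; it assumes Unassigned Devices has the drive mounted under /mnt/disks/, and both paths are made-up placeholders):

    # -a preserves attributes, -h prints human-readable sizes,
    # --progress shows per-file status
    rsync -avh --progress /mnt/user/Media/Movies/ /mnt/disks/MyExternal/Movies/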

  16. Linux OSes always use all free RAM for caching (in fact, Windows 10 does as well now). It's the way they work; that's normal.
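
    A quick way to see the split (the cached portion is reclaimable, not "used up"):

    # Look at the buff/cache column; it shrinks automatically when
    # applications actually need the RAM.
    free -h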

     

    Not sure why your transfer looks all spiky though; are there lots of large (or small) files, and maybe it's just the slowdown as it changes to the next file/etc.?

     

    I also don't think you should be doing it the way you describe either. Are you moving it via another machine? It could be that machine/laptop/etc causing the spikes.

  17. At least you didn't drop $1000 on a drone that will never show up!

     

    Regarding the OP's first question: I bought WD desktop external drives and precleared them all while in their USB enclosures (but with USB3). They never really got too hot, and I did two passes on them (2 at a time, 4 total). Then I pulled them and "upgraded" a 2TB disk in each case (so let it rebuild). The entire process took a weekend, but I feel better knowing they were precleared twice before using them.

  18. It might be good to get a full SMART report from it now.

     

    It fell out of the array somehow; unless you've already gone in and checked all the physical connections, you should try to figure out what happened.

     

    BTW, 2 months is not too soon for an HDD to start failing. You expect it to last at least its warranty period and then some, but it's a spinning mechanical device, so it could do anything.

  19. I'd look around for a motherboard for that CPU that supports VT-d. It's a great CPU; I'd hang on to it and use it.

     

    Surely there is more than one; you could probably find a last-gen ASRock/etc and it should support VT-d. You could even find one with more SATA ports/etc?

     

    But yeah, keep the Intel.

     

    Regarding Win10, you can still spin it up in a VM and get it activated etc. I don't think it will change so much as to invalidate the license inside a VM, regardless of the actual motherboard of the host. It's all emulated anyway (and with VT-d, you'd probably just pass through a GPU/USB/etc, which shouldn't change much).

     

    Then again, I have no idea how it even activates in a VM in the first place.