niXta- Posted November 18, 2020

8 hours ago, crazyhorse90210 said:

For some reason I had a problem with line 241 in the new script, where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:

# Check CA Appdata plugin not backing up or restoring
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then

I kept getting this error:

/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error in conditional expression
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error near `]'
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: ` if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then'
Script Finished Nov 17, 2020 16:41.15

So I removed the outside brackets and it worked with no error:

# Check CA Appdata plugin not backing up or restoring
if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]; then

I also noticed this yesterday; it should be either single brackets or double brackets throughout, not a mix:

# Check CA Appdata plugin not backing up or restoring
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then

@DZMM Hope you didn't chug it all at once
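For anyone hitting the same error: `[` is an ordinary command that needs its own closing `]`, while `[[ ... ]]` is a shell keyword, so the two styles can't be nested inside each other. A minimal sketch of both valid forms (the temp paths below are placeholders standing in for the CA Appdata flag files):

```shell
#!/bin/bash
# Single-bracket form: each [ ... ] is a separate test command, joined by ||
check_flags() {
  [ -f "$1" ] || [ -f "$2" ]
}

# Double-bracket form: || lives *inside* one [[ ... ]] expression
check_flags2() {
  [[ -f "$1" || -f "$2" ]]
}

# Demo with placeholder paths standing in for the CA Appdata flag files
tmp=$(mktemp -d)
touch "$tmp/backupInProgress"

check_flags  "$tmp/backupInProgress" "$tmp/restoreInProgress" && echo "busy"   # prints: busy
check_flags2 "$tmp/none1" "$tmp/none2" || echo "idle"                          # prints: idle

rm -rf "$tmp"
```

Both forms behave identically here; the only invalid combination is opening with `[[` and closing an inner test with a single `]`, which is exactly what bash complained about.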
MowMdown Posted November 18, 2020

On 11/17/2020 at 9:10 AM, axeman said:

This might be a me thing - since I need the rclone mounts available to my Windows machines, I have it in /mnt/user. If I try to copy a file into the unioned mount from inside of Unraid via MC, it works exactly as you'd want... the file goes right to the local share. However, if I do the same from a Windows machine - it fails. Interestingly, this doesn't seem to be a problem on an Android device (using SolidExplorer). Nor does it happen with the @DZMM mergerfs-based scripts.

That is strange, I'm not sure why it would write to the _vfs upstream if you have your local path listed first in the union when writing to the union...

For the record, you can export /mnt/disks/some_dir as a share through SMB. Example:

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end

[some_dir]
path = /mnt/disks/some_dir
comment =
browsable = yes
# Public
public = yes
writeable = yes
vfs object =

You simply add this to the SMB extras under Settings > SMB
MowMdown Posted November 18, 2020

18 hours ago, eqjunkie829 said:

Is there a way for me to modify the Mount Script Version 0.96.9.1 to disable caching completely?

Just change --vfs-cache-mode full to --vfs-cache-mode off
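If you would rather not hand-edit the script, a one-line sed over it makes the same swap. A sketch, demonstrated on a throwaway file standing in for the real mount script:

```shell
#!/bin/bash
# Demonstrate the cache-mode swap on a throwaway copy of a mount line.
# "$script" is a temp file here - point sed at your own script instead.
script="$(mktemp)"
echo 'rclone mount --vfs-cache-mode full gdrive_vfs: /mnt/user/mount_rclone &' > "$script"

# Swap the cache mode in place (GNU sed, as on Unraid)
sed -i 's/--vfs-cache-mode full/--vfs-cache-mode off/' "$script"

grep -o -- '--vfs-cache-mode off' "$script"   # prints: --vfs-cache-mode off
rm -f "$script"
```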
Bolagnaise Posted November 20, 2020

Just wanted to post this here in case anyone else is experiencing high CPU usage on beta 35 with the new Nvidia drivers. Change your docker config paths from /mnt/user/cache to /mnt/cache/. My 9900K was being maxed out on the new beta attempting to access files using SHFS.
JonathanM Posted November 20, 2020

5 hours ago, Bolagnaise said:

Change your docker config paths from /mnt/user/cache to /mnt/cache/.

If /mnt/user/cache exists, something is wrong already. There should NOT be a user share named cache. It should only exist as a disk share.
DZMM Posted November 20, 2020 (Author)

FYI: important security update to fix potentially vulnerable passwords: https://forum.rclone.org/t/rclone-1-53-3-release/20569
Bolagnaise Posted November 21, 2020

15 hours ago, jonathanm said:

If /mnt/user/cache exists, something is wrong already. There should NOT be a user share named cache. It should only exist as a disk share.

Mind if I take this to PM to discuss?
Bolagnaise Posted November 21, 2020

Doing some more investigation, as the problem has not disappeared: it looks like mergerfs is maxing out my CPU. Screenshot attached. This wasn't present in beta 30. Not sure where to go from here.
DZMM Posted November 21, 2020 (Author)

1 hour ago, Bolagnaise said:

Doing some more investigation, as the problem has not disappeared: it looks like mergerfs is maxing out my CPU. Screenshot attached. This wasn't present in beta 30. Not sure where to go from here.

Can you file a report in the beta35 thread please? I don't know if it's related, but beta30 and beta35 cause my machine to completely freeze/crash and I have to do a hard reset. If anyone else is successfully using beta35 then please shout out!
axeman Posted November 21, 2020

On 11/18/2020 at 10:50 AM, MowMdown said:

For the record, you can export /mnt/disks/some_dir as a share through SMB. ... You simply add this to the SMB extras under Settings > SMB

Thanks - I will try that. Didn't know it was possible. Incidentally, do you know where /mnt/disks/some_dir is physically located? Does it go on the cache drive, or is it in memory/RAM?
MowMdown Posted November 22, 2020

It's in memory, mounted as a tmpfs, IIRC.
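Rather than take that on faith, you can ask the kernel what filesystem backs a path; `tmpfs` means it lives in RAM. A small sketch (run it against your own /mnt/disks path - the targets below are just safe demo paths):

```shell
#!/bin/bash
# Print the filesystem type backing a path; "tmpfs" means RAM-backed.
fs_type() {
  stat -f -c %T "$1"
}

fs_type /      # e.g. ext2/ext3, btrfs, xfs, ...
fs_type /dev   # usually tmpfs or devtmpfs on a running Linux box
```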
axeman Posted November 23, 2020

On 11/18/2020 at 10:50 AM, MowMdown said:

For the record, you can export /mnt/disks/some_dir as a share through SMB. ... You simply add this to the SMB extras under Settings > SMB

So even with this, Windows cannot write to the share via SMB. Googled it, and it seems sharing an rclone vfs mount can work with some modifications. I don't know that I have the know-how to do that. I'm grateful for your time going through this with me. I may have to go back to mergerfs.
Bolagnaise Posted November 23, 2020

On 11/21/2020 at 6:29 PM, DZMM said:

Can you file a report in the beta35 thread please? I don't know if it's related, but beta30 and beta35 cause my machine to completely freeze/crash and I have to do a hard reset. If anyone else is successfully using beta35 then please shout out!

Yep, done. Still trying to track down the issue, but CPU usage has dropped significantly after doing a complete power cycle instead of a reboot. I still see occasional 100% spikes that weren't prevalent in beta30, so I'm not sure.

Anyway, can you share your recommendations for the 'use cache pool' option for the mount_rclone, mount_unionfs and rclone_upload folders? I have updated the script you gave me to include the vfs cache option and it's running well; I just want to make sure it's fully optimised.

Does changing 'RcloneCacheMaxSize' automatically reduce the cache, or do you need to unmount/reboot to reduce the size? I currently have it set to 400G as per your script, but I'm considering buying another dedicated 2TB NVMe drive just for the vfs cache.
DZMM Posted November 23, 2020 (Author)

3 hours ago, Bolagnaise said:

Anyway, can you share your recommendations for the 'use cache pool' option for the mount_rclone, mount_unionfs and rclone_upload folders? I have updated the script you gave me to include the vfs cache option and it's running well; I just want to make sure it's fully optimised.

- mount_rclone and mount_mergerfs are virtual folders so it doesn't matter. I've set mine to 'no' though
- /local - user's choice whether to use a faster cache or pool drive, or use the array. I've set mine to 'no' as I don't need fast access and files don't tend to hang around long before being uploaded. I do use a separate /downloads share for my nzbget intermediate files, saved on a pool drive, with completed files moved to the array

3 hours ago, Bolagnaise said:

Does changing 'RcloneCacheMaxSize' automatically reduce the cache, or do you need to unmount/reboot to reduce the size? I currently have it set to 400G as per your script, but I'm considering buying another dedicated 2TB NVMe drive just for the vfs cache.

You need to remount, as the size is set when you do the mount. I've gone for 400GB as that works well for me with the size of my array: I've got about 7 mounts, so it's 7x400 = 2.8TB of cached files in total out of my 16TB total storage. My two array drives are spun up pretty much 24x7 and I don't have a parity drive to slow them down, so I don't think I would benefit from an SSD or NVMe for the rclone cache. Remember these files are separate from the Plex metadata files, which are small and numerous and so benefit from a fast drive. If I were you, I'd just use a normal HDD outside of your array so your parity setup doesn't slow the drive down.
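The sizing above is just mounts times per-mount cap, since each mount enforces its own --vfs-cache-max-size. A quick sanity check if you're weighing up a dedicated cache drive (the numbers are the example from this thread; swap in your own):

```shell
#!/bin/bash
# Total vfs-cache ceiling = number of mounts x per-mount --vfs-cache-max-size
mounts=7
cache_per_mount_gb=400
total_gb=$((mounts * cache_per_mount_gb))
echo "${total_gb}G total cache ceiling"   # prints: 2800G total cache ceiling
```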
privateer Posted November 23, 2020

Up until recently I've been using the OG scripts (no team drive, unionfs, etc). I've migrated over to the new way of doing things and had some difficulty. I cobbled together something that worked for a small specific use, but when I start from scratch using the scripts I can't get my whole gdrive mounted. Inside mount_rclone I have /cache/ and /gdrive_media_vfs/. However, inside the gdrive folder I only have a mountcheck file and one of the many folders I have in my gdrive. I'm sure I've just made an error somewhere, but I migrated my data from my drive to the team drive and I'm unable to see those folders and files. Any thoughts?

EDIT: I migrated files from "My Drive" to the Team Drive section and those files aren't showing up when I use rclone lsd gdrive_media_vfs. I only see an existing folder I had in the team drive and all the data/folders inside. I migrated using the move folder command in gdrive's web interface. When I use rclone lsd gdrive (unencrypted) it shows 4 encrypted folders... anyone know what's happening or could be happening to those 3 folders?
Bolagnaise Posted November 23, 2020

9 hours ago, DZMM said:

- /local - user's choice whether to use a faster cache or pool drive, or use the array. I've set mine to 'no' as I don't need fast access and files don't tend to hang around long before being uploaded. ...

If you remember, I moved from the old script, so my local is my rclone_upload folder. Here's my current script for reference:

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_unionfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex sonarr sonarr4K radarr radarr4K" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/completed,downloads/intermediate,downloads/seeds,Movies,TV Shows,4KMovies,4kTVShows"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP address for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[ $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    mkdir -p $MergerFSMountLocation
fi

####### Check if script is already running #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity. Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --dir-cache-time $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 15s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems. Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
        # check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
            # Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
            # check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully. Please check for errors. Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
        # Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
        # Extra Mergerfs folders
        if [[ $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[ $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[ $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
        # make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
        # mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
        # check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed. Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    # Check CA Appdata plugin not backing up or restoring
    if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start $DockerStart
    fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
exit
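A side note on the script above: its "already running" guard and its mountcheck test both reduce to the same marker-file pattern, which is worth understanding before modifying the script. A stripped-down sketch (the lock path here is a temp placeholder for the appdata path the script uses):

```shell
#!/bin/bash
# Stripped-down version of the script's "already running" guard.
lockdir=$(mktemp -d)          # placeholder for /mnt/user/appdata/other/rclone/remotes/<remote>
lock="$lockdir/mount_running"

if [ -f "$lock" ]; then
  echo "already running - exiting"
else
  touch "$lock"
  echo "proceeding"           # prints: proceeding (no lock existed yet)
  # ... mount work would happen here ...
  rm -f "$lock"               # always remove the lock when done
fi

rmdir "$lockdir"
```

This is also why a crashed run can leave the script refusing to start: the mount_running file survives the crash and has to be deleted by hand.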
privateer Posted November 24, 2020

Thanks - I don't see anything there that I missed. I think the issue arose when I migrated my data from My Drive to the Team Drive. I used rclone lsd to look through my old mounts and new mounts, and it looks like the issue might have been when I moved the data using the gdrive web UI. Is there a "right" way to move data from the original location ("My Drive") to the new location (Team Drive)?
francrouge Posted November 24, 2020

Quick question for you guys: with Google changing things for Workspace, are we going to lose all our storage? Thx
chefmoisas Posted November 24, 2020

Hi guys, I used this guide to install rclone, and after I start the mount script I can't stop it, even if I run the unmount script. If I try to stop the array I get this message: "Array Stopping • Retry unmounting user share(s)...". Anyone with a solution? Many thanks
axeman Posted November 24, 2020

9 minutes ago, chefmoisas said:

Hi guys, I used this guide to install rclone, and after I start the mount script I can't stop it, even if I run the unmount script. If I try to stop the array I get this message: "Array Stopping • Retry unmounting user share(s)...". Anyone with a solution? Many thanks

This is a bit crude, but I run this in the terminal:

ps -ef | grep /mnt/user

Then, for each PID that comes up:

kill <pid>

Someone could probably script a better way to do it and run it at array stop, but for now that's what I've been doing.
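One slightly less manual version of the same idea, using pgrep instead of eyeballing ps output. This is a hedged sketch, not tested against a live Unraid array stop - the helper name and the example pattern are made up for illustration, so point the pattern at your own mount path:

```shell
#!/bin/bash
# kill_users_of: terminate every process whose command line mentions a path.
# Hypothetical helper - adjust the pattern to your own mount location.
kill_users_of() {
  local pattern="$1" pids
  pids="$(pgrep -f "$pattern" || true)"   # pgrep excludes itself from matches
  if [ -z "$pids" ]; then
    echo "nothing to kill"
    return 0
  fi
  echo "$pids" | xargs -r kill
}

kill_users_of "/mnt/no_such_mount_here"   # prints: nothing to kill
```

Unlike the grep pipeline, pgrep never matches its own process, so you avoid the confusing self-match mentioned a few posts below.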
chefmoisas Posted November 24, 2020

Now I fuc**** up. I ran this in the console:

#!/bin/bash
logger "/usr/local/sbin/powerdown has been deprecated"
if [[ "$1" == "-r" ]]; then
/sbin/reboot
else
/sbin/poweroff
fi

The server shut down. Now when I start it I can ping it, but the web interface is not working. Any ideas?
chefmoisas Posted November 24, 2020

43 minutes ago, axeman said:

This is a bit crude, but I run this in the terminal: ps -ef | grep /mnt/user - then, for each PID that comes up: kill <pid>

There is a PID that I can't stop:

root 32265 30343 0 14:45 pts/0 00:00:00 grep /mnt/user
axeman Posted November 25, 2020

1 hour ago, chefmoisas said:

There is a PID that I can't stop

That's just your own grep command matching itself - you should see that your array has now stopped.
chefmoisas Posted November 25, 2020

22 hours ago, axeman said:

That's just your own grep command matching itself - you should see that your array has now stopped.

You are right, many thanks!
chefmoisas Posted November 25, 2020

Hi guys, after I start rclone_mount I start rclone_upload and get this log. rclone is not installed, but the gdrive mounts are working:

Script Starting Nov 25, 2020 15:41.24
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
25.11.2020 15:41:24 INFO: *** Rclone move selected. Files will be moved from /mnt/user/Media/gdrive for gdrive ***
25.11.2020 15:41:24 INFO: *** Starting rclone_upload script for gdrive ***
25.11.2020 15:41:24 INFO: Script not running - proceeding.
25.11.2020 15:41:24 INFO: Checking if rclone installed successfully.
25.11.2020 15:41:25 INFO: rclone not installed - will try again later.
Script Finished Nov 25, 2020 15:41.25
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Any solution? Many thanks
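That "rclone not installed" line generally means the upload script's binary check failed, i.e. rclone isn't on the PATH of the shell the script runs in - usually because the rclone plugin isn't installed or hasn't finished installing after a reboot. You can reproduce the check yourself; a sketch (`check_bin` is an illustrative name, not something from the scripts):

```shell
#!/bin/bash
# check_bin: report whether a command is available on PATH.
check_bin() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 installed"
  else
    echo "$1 missing"
  fi
}

check_bin rclone   # on a box without the plugin this prints: rclone missing
check_bin bash     # prints: bash installed
```

If this prints "rclone missing" in an Unraid terminal, fix the plugin install first; the upload script will then proceed on its next run.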