[Plugin] rclone


Waseh

Recommended Posts

On 10/31/2019 at 8:47 PM, theDrell said:

I am having a weird problem. I have the rclone-beta plugin set up on Unraid 6.7.2 and it was working great. I restarted my machine, and now when rclone tries to mount the drive I get:


Failed to create file system for "gdrive:": couldn't find root directory ID: googleapi: Error 404: File not found:

I downloaded my rclone.conf file and used it with rclone on my Windows machine. Everything mounts just fine using the same file.

My mount script looks like this:

 


mkdir -p /mnt/disks/gdrive

rclone mount --max-read-ahead 1024k --allow-other gdrive: /mnt/disks/gdrive &

 

Under my rclone-beta config I have the following, with secrets and tokens removed

 


[gdrive]
type = drive
client_id = CODE.apps.googleusercontent.com
client_secret = SECRET
scope = drive.file
token = {"access_token":"TOKEN","token_type":"Bearer","refresh_token":"1//SOMETHING","expiry":"2019-10-31T10:12:03.849383085-05:00"}

 

 

Anyone have any idea?
 

So if I go through and reauthorize my gdrive again, will that break anything? Windows rclone using the same rclone.conf works fine; just the Unraid rclone-beta seems to not work anymore.

Link to comment
27 minutes ago, theDrell said:

So if I go through and reauthorize my gdrive again, will that break anything? Windows rclone using the same rclone.conf works fine; just the Unraid rclone-beta seems to not work anymore.

Reauthorizing won't break anything... it just changes the access token field in your rclone.conf.

 

But since you said your rclone.conf file works fine with rclone on Windows but not on Unraid, I'd say it's something with your Unraid rclone installation. Did you try removing the rclone-beta plugin, restarting the server, and reinstalling the rclone-beta plugin?
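
If you do decide to reauthorize, something like the following should do it (assuming your remote is named gdrive, as in your config, and a reasonably recent rclone):

# Re-runs the Google authorization flow and rewrites only the token field
# in rclone.conf; the rest of the remote's settings are left untouched.
rclone config reconnect gdrive: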

Link to comment
2 hours ago, Stupifier said:

Reauthorizing won't break anything... it just changes the access token field in your rclone.conf.

 

But since you said your rclone.conf file works fine with rclone on Windows but not on Unraid, I'd say it's something with your Unraid rclone installation. Did you try removing the rclone-beta plugin, restarting the server, and reinstalling the rclone-beta plugin?

No, I haven't tried reinstalling it yet. I have tried restarting the server. I'll try reinstalling rclone-beta tonight and see if that gets me anywhere.

Link to comment
9 hours ago, Stupifier said:

Reauthorizing won't break anything... it just changes the access token field in your rclone.conf.

 

But since you said your rclone.conf file works fine with rclone on Windows but not on Unraid, I'd say it's something with your Unraid rclone installation. Did you try removing the rclone-beta plugin, restarting the server, and reinstalling the rclone-beta plugin?

So I tried reinstalling the plugin. Nothing. I even tried reinstalling the regular rclone plugin; nothing. Same error: Failed to create file system for "gdrive:": couldn't find root directory ID: googleapi: Error 404: File not found: QWE12345A.....

 

I tried refreshing the token; that didn't work. I also have a secure remote that doesn't work, so neither of them works. They worked great right after I initially set them up.

Link to comment
On 11/5/2019 at 4:50 AM, theDrell said:

So I tried reinstalling the plugin. Nothing. I even tried reinstalling the regular rclone plugin; nothing. Same error: Failed to create file system for "gdrive:": couldn't find root directory ID: googleapi: Error 404: File not found: QWE12345A.....

 

I tried refreshing the token; that didn't work. I also have a secure remote that doesn't work, so neither of them works. They worked great right after I initially set them up.

Well, this is very odd. I'm seeing this problem on my VPS, which runs plain Debian, but I have no problems on my Unraid server.
The difference is that the VPS is connected to my private Google Drive, while my Unraid server is connected to my G Suite account.

According to my logs, the VPS stopped working on October 27, which coincides with the release of rclone v1.50.0.

Link to comment
2 hours ago, Kaizac said:

Woke up this morning to a non-functioning server. I'm running rclone-beta, and suddenly my whole rclone config is erased. So make sure you have your configs backed up.

Have you looked on your flash drive to see if the config is still there, or did you run rclone config to check?
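
For reference, a quick way to check both at once (assuming rclone is on the PATH):

rclone config file   # prints the path of the config file rclone is actually using
rclone config show   # dumps the parsed remotes (tokens included), so keep the output private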

Link to comment
1 hour ago, Waseh said:

Have you looked on your flash drive to see if the config is still there, or did you run rclone config to check?

Yes, checked that but it's an empty config. CA restore also didn't fix it, so I'm now troubleshooting. Really don't feel like manually restoring 60 remotes....

Edited by Kaizac
Link to comment
8 minutes ago, Kaizac said:

Yes, checked that but it's an empty config. CA restore also didn't fix it, so I'm now troubleshooting. Really don't feel like manually restoring 60 remotes....

I have an hourly cronjob running that backs up my rclone config to a private repo.....

 

That way, I maintain a backup AND I can deploy my rclone setup on any system quickly (including service account json files).
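
A sketch of that kind of job (the remote name and paths below are placeholders to adjust for your own setup; the same idea works with a git push instead of an rclone copy):

#!/bin/bash
# Back up the rclone config to a private remote and keep a dated local copy too.
CONF=/boot/config/plugins/rclone/.rclone.conf   # rclone-beta keeps its config under plugins/rclone-beta/ instead

# Push the config to a private remote ("backup-remote:" is a placeholder)
rclone copy "$CONF" backup-remote:rclone-config-backup/ --config "$CONF"

# Keep an hourly-stamped local copy as well
mkdir -p /mnt/user/backups/rclone
cp "$CONF" /mnt/user/backups/rclone/.rclone.conf.$(date +%Y%m%d%H)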

Link to comment
5 minutes ago, Stupifier said:

I have an hourly cronjob running that backs up my rclone config to a private repo.....

 

That way, I maintain a backup AND I can deploy my rclone setup on any system quickly (including service account json files).

Yeah, I do the same: a cron job together with rclone for a local backup. So I just restored the config manually, rebooted the system, and it's all working again.

Very strange behaviour; I've never had this happen before.

Link to comment
On 11/5/2019 at 4:50 AM, theDrell said:

So I tried reinstalling the plugin. Nothing. I even tried reinstalling the regular rclone plugin; nothing. Same error: Failed to create file system for "gdrive:": couldn't find root directory ID: googleapi: Error 404: File not found: QWE12345A.....

 

I tried refreshing the token; that didn't work. I also have a secure remote that doesn't work, so neither of them works. They worked great right after I initially set them up.

I downgraded the rclone version on my VPS to 1.49.5 and it's working again.
I'm going to open a topic on the rclone forum.

Edited by Waseh
Link to comment
22 minutes ago, theDrell said:

Any easy instructions to downgrade our rclone version on Unraid?

Okay, I downloaded the prebuilt rclone and overwrote the one installed by the plugin, but then had to add 

--config=/boot/config/plugins/rclone/.rclone.conf 

to my rclone mount command, and my drives are back now until this gets fixed.
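
For anyone else wanting to do the same, roughly the steps involved (a sketch: 1.49.5 matches the version Waseh rolled back to, the URL follows rclone's standard download location, and "which rclone" will show where the plugin actually put its binary):

cd /tmp
wget https://downloads.rclone.org/v1.49.5/rclone-v1.49.5-linux-amd64.zip
unzip rclone-v1.49.5-linux-amd64.zip

# Overwrite the plugin-installed binary with the older release
cp rclone-v1.49.5-linux-amd64/rclone "$(which rclone)"
chmod 755 "$(which rclone)"

# Remount, pointing the standalone binary at the plugin's config file
rclone mount --max-read-ahead 1024k --allow-other \
  --config=/boot/config/plugins/rclone/.rclone.conf \
  gdrive: /mnt/disks/gdrive &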

Link to comment
7 hours ago, theDrell said:

Okay, I downloaded the prebuilt rclone and overwrote the one installed by the plugin, but then had to add 


--config=/boot/config/plugins/rclone/.rclone.conf 

to my rclone mount command, and my drives are back now until this gets fixed.

ncw, the maintainer of rclone, has already made a patch that I confirmed fixes the problem.
It should get released soon.

  • Like 1
Link to comment
1 hour ago, ur6969 said:

I had the issue others have had over the last few days. I think it is back working, but my rclone version is 1.50.1; is that correct, or should it be 1.50.2?

 

How do I back up my config file? I would like to use it on a different computer at work to pull down and sync.

If you share your flash drive, then it's under flash/config/plugins/rclone (or rclone-beta)/.rclone.conf.

Link to comment

I started my initial sync of ~150 GB last night and was surprised to see it still running when I got home tonight. I ended up killing the process to stop the sync, as I was unsure of when it might finish and I am limited to a data cap by my ISP.

 

An "rclone size" command reveals 106 GB but Backblaze B2 says the bucket has 172 GB in it.  The bucket was created exclusively for Rclone, is my only bucket, as was empty when I started the sync.

 

I tested multiple folders with small (< 10 GB) data amounts prior to going all in on rclone, and everything went fine. Backblaze wanted to retain files after deletion, but I changed the settings in both rclone and the bucket to only keep the last version, and everything seemed correct.

 

What could I be missing to have such different totals?

Link to comment
  • 2 weeks later...
On 11/14/2019 at 12:48 AM, ur6969 said:

I started my initial sync of ~150 GB last night and was surprised to see it still running when I got home tonight. I ended up killing the process to stop the sync, as I was unsure of when it might finish and I am limited to a data cap by my ISP.

 

An "rclone size" command reveals 106 GB but Backblaze B2 says the bucket has 172 GB in it.  The bucket was created exclusively for Rclone, is my only bucket, as was empty when I started the sync.

 

I tested multiple folders with small (< 10 GB) data amounts prior to going all in on rclone, and everything went fine. Backblaze wanted to retain files after deletion, but I changed the settings in both rclone and the bucket to only keep the last version, and everything seemed correct.

 

What could I be missing to have such different totals?

This seems to have worked itself out.

 

1. I manually started the rclone script when I had it ready, and then it appears to have started again when the cron job in User Scripts triggered on top of it, resulting in multiple copies of many files and the extreme data usage. After killing the processes and then allowing it to fire up again via User Scripts, it resolved itself and completed.

 

2. Backblaze took 24 hours to fully remove the duplicate files and then show correct data amounts (at least I think so!). The Backblaze total does not match the "rclone size" total or the Windows Explorer total, but the file counts are the same and the totals are close enough that it's got to be the difference between decimal GB and binary GiB measurements.
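
For anyone hitting the same mismatch, one way to check whether old hidden versions are what's inflating the bucket (the remote and bucket names here are placeholders):

# Size of what rclone normally sees: current file versions only
rclone size b2remote:my-bucket

# Size including old/hidden versions that B2 is still storing
rclone size b2remote:my-bucket --b2-versions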

Link to comment
  • 3 weeks later...

Hello 

I am unable to connect to my Jottacloud with the latest rclone. I'm getting an error when I refresh the token...

 

Failed to get oauth token: illegal base64 data at input byte 260. It works on Windows, just not on Unraid....

 

If there is no fix, is there a way to roll back to an older version?

Link to comment
  • 2 weeks later...

Close to having rclone set up and running, but I'm having a few issues....

 

I pretty much followed SpaceInvader's video to a T.

 

So I can transfer from /mnt/disks/secure --> /mnt/disks/gdrive fine, and /mnt/disks/gdrive matches what's on my gdrive account...

 

My path \\MARS\secure-cloud matches what's in /mnt/disks/secure.

 

The issue is when I try to transfer to the mounted cloud path from my desktop. I can transfer from the mounted cloud path TO my desktop, but not vice versa.

 

[screenshot: the Windows transfer dialog stalling]

 

It stalls for a couple minutes in the transfer window, then says

 

[screenshot: Windows error 0x8007003B]

 

Now if I transfer (from Windows) Test.mp4 from my desktop --> gdrive (X:), and then go into Krusader and transfer Test.mp4 from /mnt/disks/gdrive --> /mnt/disks/secure, it works fine. Just an extra step involved is all. Like so: https://i.imgur.com/I7sZgQo.png

 

Googling 0x8007003B....:

 

- "It seems the problem is not the SMB, i have mounted now the share over NFS and i get the same error. "

- "You have Intel & Realtek NIC in bonding bridge, pls try remove Intel from the bridge and plug LAN cable in Realtek only."

- "This did solve it for me. I bonded the onboard Intel with a Realtek card. Both 1gb ports.  

I set them up with 802.3ad and made the adjustments on the unifi US8 switch.  All seemed good and I was pulling down multiple files from the server faster than before. 

Then tried to upload a 5gb file and the 0x0 error started.  Found this thread, dropped the bonding and now I can copy up. "

 

- "I have managed to solve this I think, and for anyone else with the same issue, this seems to have been a setting in Windows 10 that needed to be disabled in order to stop this: netsh interface tcp set global chimney=disabled"

 

"I updated the virtIO network fixed my 0x8007003B error." I re-downloaded the virtio drivers iso and did a driver update and now seems to be working."

 

 

The only two things that are failing are: 

1. Transferring Test.mp4 from my desktop to \\MARS\secure-cloud
2. Transferring Test.mp4 from the gdrive (X:) mount to \\MARS\secure-cloud
 

Here's a screenshot of my setup now

[screenshot: current share and mount setup]

 

 

Also, when mounting the drives, if the directories aren't empty the logs show this:

 

2019/12/22 13:21:15 Fatal error: Directory is not empty: /mnt/disks/gdrive If you want to mount it anyway use: --allow-non-empty option
2019/12/22 13:21:16 Fatal error: Directory is not empty: /mnt/disks/secure If you want to mount it anyway use: --allow-non-empty option

 

But if you try to add that flag to the script, it produces another error, since fusermount apparently doesn't support the FUSE option that --allow-non-empty relies on.
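
A rough sketch of an alternative that avoids the flag entirely (same paths as above, untested on my end): clean up any stale mount and refuse to mount over leftover local files instead of hiding them.

#!/bin/bash
# Detach a previous (possibly half-dead) mount before remounting
if mountpoint -q /mnt/disks/gdrive; then
    fusermount -uz /mnt/disks/gdrive
fi
mkdir -p /mnt/disks/gdrive

# Leftover local files are what triggers the "Directory is not empty" error
if [ -n "$(ls -A /mnt/disks/gdrive)" ]; then
    echo "/mnt/disks/gdrive is not empty; move its contents aside first" >&2
    exit 1
fi

rclone mount --max-read-ahead 1024k --allow-other gdrive: /mnt/disks/gdrive &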

Edited by Supa
Link to comment
