
DZMM

Members
  • Posts: 2,801
  • Days Won: 9
Everything posted by DZMM

  1. Stay safe @TechMed and thanks for working so hard through the crisis.
  2. Good work everyone - we're up to #69 on folding@home over the last 24 hours. We should make top 50 by tomorrow. https://folding.extremeoverclocking.com/team_list.php?s=&srt=3 Update: #38 on points today! https://folding.extremeoverclocking.com/team_list.php?s=&srt=5
  3. I'm assuming you've set up dynamic DNS in pfsense to update duckdns with your IP address for the domains and sub-domains you are using?
  4. Is there a way to write a script to schedule how many cores are used? I can assign a few more cores while my kids are at school and don't need their VMs. Thanks for offering an easy way to support the fight.
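     Something like this is roughly what I had in mind as a User Scripts cron job - the container name and core ranges are made-up examples, untested (and the F@H client's own slot CPU count may also need changing):

     #!/bin/bash
     # Rough sketch (untested): give the Folding@home container more cores during
     # school hours, then hand them back to the VMs in the evening.
     # "FoldingAtHome" and the core ranges below are placeholders - adjust to suit.
     HOUR=$(date +%H)
     if [ "$HOUR" -ge 9 ] && [ "$HOUR" -lt 16 ]; then
         # school hours - the kids' VMs are idle, so let folding use more cores
         docker update --cpuset-cpus="0-23" FoldingAtHome
     else
         # evenings - pin folding back so the VMs get their cores
         docker update --cpuset-cpus="0-7" FoldingAtHome
     fi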
  5. The unmount script doesn't have any fusermount commands, as the new script structure makes this difficult (mount locations are variable). The script is intended to be a cleanup script to be run at array start. Do you need it to run at array stop? If so, just add your own fusermount commands to the script.
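     For reference, the kind of thing I mean is just a couple of lazy unmounts - the paths below are the example share names from the guide, so swap in whatever your script actually mounts:

     # run at array stop - lazy-unmount the mergerfs and rclone mounts
     # paths are examples only; change them to match your own mount locations
     fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs
     fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs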
  6. It works. I have an asymmetric 360/180 connection (moved before xmas and lost my 1G symmetric connection 😞), so I tend to have more than 750GB pending upload - also because I use bwlimits to make sure I've got some spare upload left, even though I use traffic shaping on my pfsense VM. It stops once any transfers that started before the 750GB limit was hit have finished, and then resumes for the next run with a new SA.
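     For anyone wondering, the bwlimit is just rclone's --bwlimit flag on the upload command, e.g. something like this (the remote name, times and speeds here are examples, not my actual values):

     # cap uploads to 10MB/s during the day, unlimited overnight
     rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
         --bwlimit "08:00,10M 23:00,off" \
         --drive-stop-on-upload-limit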
  7. I've managed to get the WebUI working, but it keeps saying 'Could not get an assignment'. Any ideas what's wrong? I've opened all ports on pfsense:

     08:48:01:************************* Folding@home Client *************************
     08:48:01:    Website: https://foldingathome.org/
     08:48:01:  Copyright: (c) 2009-2018 foldingathome.org
     08:48:01:     Author: Joseph Coffland <[email protected]>
     08:48:01:       Args: --config /config/config.xml
     08:48:01:     Config: /config/config.xml
     08:48:01:******************************** Build ********************************
     08:48:01:    Version: 7.5.1
     08:48:01:       Date: May 11 2018
     08:48:01:       Time: 19:59:04
     08:48:01: Repository: Git
     08:48:01:   Revision: 4705bf53c635f88b8fe85af7675557e15d491ff0
     08:48:01:     Branch: master
     08:48:01:   Compiler: GNU 6.3.0 20170516
     08:48:01:    Options: -std=gnu++98 -O3 -funroll-loops
     08:48:01:   Platform: linux2 4.14.0-3-amd64
     08:48:01:       Bits: 64
     08:48:01:       Mode: Release
     08:48:01:******************************* System ********************************
     08:48:01:        CPU: AMD Ryzen Threadripper 2950X 16-Core Processor
     08:48:01:     CPU ID: AuthenticAMD Family 23 Model 8 Stepping 2
     08:48:01:       CPUs: 32
     08:48:01:     Memory: 125.85GiB
     08:48:01:Free Memory: 987.81MiB
     08:48:01:    Threads: POSIX_THREADS
     08:48:01: OS Version: 4.19
     08:48:01:Has Battery: false
     08:48:01: On Battery: false
     08:48:01: UTC Offset: 0
     08:48:01:        PID: 27
     08:48:01:        CWD: /config
     08:48:01:         OS: Linux 4.19.98-Unraid x86_64
     08:48:01:    OS Arch: AMD64
     08:48:01:       GPUs: 0
     08:48:01:       CUDA: Not detected: cuInit() returned 100
     08:48:01:     OpenCL: Not detected: clGetPlatformIDs() returned -1001
     08:48:01:***********************************************************************
     08:48:01:<config>
     08:48:01:  <!-- Client Control -->
     08:48:01:  <fold-anon v='true'/>
     08:48:01:  <!-- Folding Slot Configuration -->
     08:48:01:  <gpu v='false'/>
     08:48:01:  <!-- HTTP Server -->
     08:48:01:  <allow v='192.168.30.0/24'/>
     08:48:01:  <!-- Remote Command Server -->
     08:48:01:  <password v='********'/>
     08:48:01:  <!-- Slot Control -->
     08:48:01:  <power v='FULL'/>
     08:48:01:  <!-- User Information -->
     08:48:01:  <team v='227802'/>
     08:48:01:  <user v='DZMM'/>
     08:48:01:  <!-- Web Server -->
     08:48:01:  <web-allow v='192.168.30.1/24'/>
     08:48:01:  <!-- Folding Slots -->
     08:48:01:  <slot id='0' type='CPU'/>
     08:48:01:</config>
     08:48:01:Trying to access database...
     08:48:01:Successfully acquired database lock
     08:48:01:Enabled folding slot 00: READY cpu:32
     08:48:01:WU00:FS00:Connecting to 65.254.110.245:8080
     08:48:02:WARNING:WU00:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
     08:48:02:WU00:FS00:Connecting to 18.218.241.186:80
     08:48:03:WARNING:WU00:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
     08:48:03:ERROR:WU00:FS00:Exception: Could not get an assignment
     08:48:04:WU00:FS00:Connecting to 65.254.110.245:8080
     08:48:04:WARNING:WU00:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
     08:48:04:WU00:FS00:Connecting to 18.218.241.186:80
     08:48:05:WARNING:WU00:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
     08:48:05:ERROR:WU00:FS00:Exception: Could not get an assignment
     08:49:04:WU00:FS00:Connecting to 65.254.110.245:8080
     08:49:04:WU00:FS00:Connecting to 65.254.110.245:8080
     08:49:04:WARNING:WU00:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
     08:49:04:WU00:FS00:Connecting to 18.218.241.186:80
     08:49:05:WARNING:WU00:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
     08:49:05:ERROR:WU00:FS00:Exception: Could not get an assignment
  8. It's been a while, but I didn't do anything clever - I just followed these instructions: https://github.com/xyou365/AutoRclone/blob/master/Readme.md. Somehow I ended up with 500 not 100 though.
  9. Not sure what that is - did you follow the guide for creating SAs? Easiest way is one script per mount.
  10. I just add to rclone config - since I learnt that you can copy passwords and somehow the encryption still works, I do almost all my config stuff directly in the file.
  11. No need for a new project if the SA group has been added to the respective teamdrives - think of SAs as normal accounts that don't need credentials/client_ids setting up, i.e. bans work the same way - on the offending SA. They're good for efficiently handling multiple accounts for rotating etc. once they are set up.
  12. Ok, I see where your confusion is coming from. Or, like this if using service accounts:

      [gdrive]
      type = drive
      scope = drive
      service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
      team_drive = TEAM DRIVE ID
      server_side_across_configs = true

      [gdrive_media_vfs]
      type = crypt
      remote = gdrive:crypt
      filename_encryption = standard
      directory_name_encryption = true
      password = PASSWORD1
      password2 = PASSWORD2

      Fixed the readme - glad someone is reading it!
  13. You don't need to use a client_id if you're using service accounts.
  14. Can you give an example of your symlink command please? I do a similar thing with my UD drives, but I use bind mounts, e.g. I have my plex metadata on a UD 970 evo plus:

      mount --bind "/mnt/disks/ud_970evoplus/appdata/dockers/plex" "/mnt/cache/appdata/dockers/plex"

      Also, I think you should do a separate post when you have time of all the TR changes you've made, as there's a lot of useful stuff in this thread.
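      For comparison with the bind mount above, I'd guess the symlink version of the same thing looks something like this (untested, just to show the difference):

      # hypothetical symlink equivalent - the cache-side path becomes a link to the UD drive
      # (assumes the cache-side folder doesn't already exist)
      ln -s /mnt/disks/ud_970evoplus/appdata/dockers/plex /mnt/cache/appdata/dockers/plex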
  15. @Kaizac try the github read.me https://github.com/BinsonBuzz/unraid_rclone_mount
  16. In the script. Mount your other tdrives as normal, but enter 'ignore' for the mergerfs location, so that you don't get a corresponding mergerfs mount. Then for the merged mount add the extra locations:

      # OPTIONAL SETTINGS
      # Add extra paths to mergerfs mount in addition to LocalFilesShare
      LocalFilesShare2="/mnt/user/mount_rclone/gdrive_media_vfs"
      LocalFilesShare3="/mnt/user/mount_rclone/backup_vfs"
      LocalFilesShare4="ignore"

      Above you can see I've added my music and backup teamdrives to my main plex teamdrive. Then run your upload as usual against this mergerfs mount. I worked out what the gobbledygook name was in my crypts, e.g. if 'music' is crazy_folder_name, it should match up in both teamdrives if you've used the same passwords. You have to use the encrypted remotes to do it all server side:

      rclone move tdrive:crypt/crazy_folder_name gdrive:crypt/crazy_folder_name --user-agent="transfer" -vv --buffer-size 512M --drive-chunk-size 512M --tpslimit 8 --checkers 8 --transfers 4 --order-by modtime,ascending --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --drive-stop-on-upload-limit --delete-empty-src-dirs

      I probably don't need all the options - I just copied them from the main script. The transfer flies at an insane speed and is over in seconds. You do need --drive-stop-on-upload-limit so it abides by the 750GB/day limit. If you need to move more daily, then rotate service accounts by just repeating the command:

      rclone move tdrive:crypt/crazy_folder_name gdrive:crypt/crazy_folder_name --drive-service-account-file=$ServiceAccountDirectory/SA1.json
      rclone move tdrive:crypt/crazy_folder_name gdrive:crypt/crazy_folder_name --drive-service-account-file=$ServiceAccountDirectory/SA2.json
      rclone move tdrive:crypt/crazy_folder_name gdrive:crypt/crazy_folder_name --drive-service-account-file=$ServiceAccountDirectory/SA3.json

      etc etc. Add as many as you need.
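      If you'd rather not repeat the line, the same rotation can be wrapped in a quick loop - a rough sketch, assuming numbered SA files in the usual service_accounts folder (directory, file names and count are examples only):

      # rotate through SA1.json, SA2.json, SA3.json for the same server-side move
      ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts"
      for i in 1 2 3; do
          rclone move tdrive:crypt/crazy_folder_name gdrive:crypt/crazy_folder_name \
              --drive-service-account-file=$ServiceAccountDirectory/SA$i.json \
              --drive-stop-on-upload-limit --delete-empty-src-dirs -vv
      done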
  17. You only need 1 project. SAs are associated with your google account, so they can be shared between teamdrives if you want to. Of the 500 or so I created, I assign 16 to each upload script (sa_tdrive1.json ----> sa_tdrive16.json, sa_cloud1.json ----> sa_cloud16.json etc etc) - you don't need that many, but it means I've got enough to saturate a gigabit line if I need to. All you have to do is rename the file, so you might as well assign 16 to each script. If you want to reduce the number of scripts, you could do what I've done:
      1. I've added the additional rclone mounts as extra mergerfs locations, so that I only have one master mergerfs share for say teamdrive1, td2 etc etc - saves a bit of RAM
      2. I have one upload moving local files to teamdrive1 - saves a bit of RAM and makes it easier to manage bandwidth
      3. Overnight I do a server-side move from td1-->td2, td1-->td3 etc etc for the relevant folders - limited RAM and no bandwidth hit as it's done server-side
      4. All files are still accessible to the mergerfs share in #1 - files are just picked up from their respective rclone mounts, rather than local or the td1 mount
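      As an example of the renaming, this is roughly how you could turn the AutoRclone output into per-script SA files - the source folder, target names and count here are placeholders, not my exact setup:

      # rough sketch - copy the first 16 AutoRclone json keys and rename them for one upload script
      n=1
      for f in /mnt/user/appdata/other/AutoRclone/accounts/*.json; do
          cp "$f" "/mnt/user/appdata/other/rclone/service_accounts/sa_tdrive$n.json"
          n=$((n+1))
          [ "$n" -gt 16 ] && break
      done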
  18. Thanks @trapexit. The worst case of both drives having less than 100GB shouldn't happen as I've got a few TBs free on the slower array 'drive'.
  19. Thanks for spotting that - changed:

      mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=true,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=100G

      While you're here, can I check the command above is doing what I want please? I want files to go to my mx500 SSD as long as there's 100GB free, and if not, write to the array - /mnt/user/local. Have I got it right? #4 on your github confused me a bit about setting minfreespace to the size of the largest cache drive.
  20. No - I'm on 6.8.2 DVB. I changed my mergerfs command to:

      mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=100G

      and retested. Copy is now going to the SSD as desired, and now I get:
      2. Array-2-SSD (Disk 2 --> MX500): 184MB/s
      3. Array-2-Mergerfs (Disk 2 --> MX500): 182MB/s
      I think this was a better test than before, as my Disk 1 is doing lots of other things, which I think is why there was such a slowdown before. Maybe something's changed in 6.8.3??