Stupifier

Members
  • Content Count: 114
  • Joined
  • Last visited
  • Days Won: 1

Stupifier last won the day on May 11 2018
Stupifier had the most liked content!
Community Reputation: 12 Good

About Stupifier
  • Rank: Advanced Member


  1. To get a log file, just include this in your rclone command: --log-file /path/to/logfile/log_filename.txt. Just like the documentation says, "string" just means the path and filename of wherever you want the log to output to. And don't forget to use a -v. AND when you use the log-file flag, your command output will ONLY go to the log file and not be displayed live in the terminal window. If you run any command in a terminal, the command will STOP if you close the terminal window.
  2. Look into the log output file flag in rclone documentation. Just log your output to a file and use tail -f terminal command to see what's been happening anytime you wanna check
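     A concrete sketch of the two posts above. The share path, remote name, and log location are hypothetical; the script only prints the assembled command (drop the echo to actually run it) so it works even without rclone installed:

     ```shell
     # Hypothetical share path and remote name; adjust to your setup.
     LOG=/tmp/rclone_sync.log
     CMD="rclone sync /mnt/user/my_documents_share gdrive:backup/my_documents_share --log-file $LOG -v"
     # Print the command instead of running it, so the sketch is self-contained:
     echo "$CMD"
     # Real use: nohup $CMD >/dev/null 2>&1 &   (keeps running if you close the terminal)
     # Then follow progress any time with:      tail -f $LOG
     ```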
  3. Glad I could help.......It was frustrating me too.
  4. I would like to know this too! Can someone please offer some suggestions? (Sorry to necro this post, but it is a good question!)
  5. Ok, I used to be able to connect to the Host network with this before the update. That allowed me to be assigned an IP on my WiFi subnet, which then allowed me to access the UnRAID GUI. NOW the instructions make us connect to the Bridge network... so how do we access the UnRAID GUI if we are on the bridge network? OpenVPN dished me out a 172.27.xxx.xxx address (docker subnet).

     Update: Figured out how to access the UnRAID GUI. Did NOT figure out how to be assigned a local address on my primary WiFi subnet though. In Admin Page ----> VPN Settings, go to the Routing section and add a line for the subnet you want your clients to have access to (for example, I added 192.168.1.0/24, which is my primary WiFi subnet and where I can access my UnRAID GUI locally).
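     For reference, that GUI routing entry corresponds to a pushed route in a plain OpenVPN server config. This is an assumption about what the admin page generates behind the scenes; the subnet is the example from the post above:

     ```
     # OpenVPN server config: push the 192.168.1.0/24 LAN route to connecting clients
     push "route 192.168.1.0 255.255.255.0"
     ```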
  6. For whatever reason, my 6.7.0 update went just fine, and I use a Marvell SATA controller. Seems like the issue is hit or miss depending on the version/type of Marvell controller you have.
  7. Regarding the Marvell Controller issues: I have a Marvell controller and updated from 6.6.1 ----> 6.7.0 without any issue. Boots fine, array starts fine, posting this message from my Win10 VM. This is the Marvell controller I use:

     IOMMU group 20: [1b4b:9172] 05:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller (rev 11)
     IOMMU group 21: [1b4b:9172] 06:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller (rev 11)
     IOMMU group 22: [1b4b:9172] 07:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller (rev 11)
  8. If both source and destination locations are LOCAL (not remotes/cloud locations), then the colon and the stuff before it are not needed in the path. Everything before the colon refers to the name of the remote (as you configured it in "rclone config"). With that said, re-read my previous post with examples. Sorry, but I can't explain much further on this. It's best to just run commands, observe the result, and learn that way.
  9. Examples below. I just threw these together in a couple minutes to show you; I don't use any of these. Sync will delete files on the destination if they do NOT match the source. If you change sync to copy, then files in the destination will NOT be deleted. This is all in the rclone documentation; I believe examples are there as well.

     This will sync a local folder on unraid (specifically, the share named "my_documents_share") to a remote named "gdrive" I set up using "rclone config". "gdrive" is a remote pointing to my Google Drive account. --ignore-existing just makes sure to skip over files that are already synced, and -v gives verbose output so you can follow along with what's going on. Also notice the remote: nomenclature is only required when the source or destination is an actual remote location (like cloud storage).

     rclone sync "/mnt/user/my_documents_share/" "gdrive:backup/my_documents_share/" --ignore-existing -v

     This will sync a folder in the remote "gdrive" to another remote "gdrive2". "gdrive2" is just another remote set up in "rclone config"; it could point to a specific folder inside your Google Drive account, or to an entirely different Google Drive account altogether. Notice here both source and destination are remotes, meaning we are just syncing data across two different cloud storage locations; nothing is being synced locally at all. Also notice --bwlimit 5M, which just limits how much bandwidth this sync will consume so you don't saturate your connection. Here I set it to 5 MB/sec.

     rclone sync "gdrive:backup/my_documents_share/" "gdrive2:backup/my_documents_share/" --ignore-existing -v --bwlimit 5M

     This is a simple command to just see the directories listed in the remote named "gdrive". Just an easy way to see what directories you have in there. You can use a depth flag (can't remember the exact wording for it) if you want it to list folders deeper than the top level.

     rclone lsd gdrive:
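     Two flags worth knowing alongside the examples above: --dry-run previews what a destructive sync WOULD copy or delete without touching anything, and the depth flag alluded to for lsd is (I believe) --max-depth. A sketch, printed rather than executed so it runs without rclone; the remotes and paths are the ones from the post above, and the depth of 2 is arbitrary:

     ```shell
     # Preview a destructive sync before running it for real:
     SYNC='rclone sync "gdrive:backup/my_documents_share/" "gdrive2:backup/my_documents_share/" --dry-run -v'
     # List directories two levels deep instead of only the top level:
     LSD='rclone lsd gdrive: --max-depth 2'
     printf '%s\n%s\n' "$SYNC" "$LSD"
     ```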
  10. Source: and Destination: are remotes. Type "rclone config" in the terminal shell to see a list of your current remotes, edit them, and add new ones. You're doing well if you are reading the rclone site. Learn about rclone copy, move, lsf, lsd, and more. Also look into the various flags (especially --ignore-existing, -v, and others).
  11. You need to learn how to use the terminal shell. Again, writing to an rclone mount is NOT reliable. It does not matter if you write to the mount folder directly or via the union folder; it is NOT reliable. STOP EXPECTING IT TO BE. Instead, if you MUST write to your Google Drive, use rclone copy/sync commands via the terminal shell. You're only limited to krusader because you are unwilling to LEARN. Put in the time and learn how to use rclone properly. If you are unwilling to learn how to use terminal shell rclone commands, then you should just stop using rclone and seek alternatives. If you Google search, you can even find people who wrote scripts to monitor your union folder and automatically fire off terminal shell rclone uploads for you. Cloudplow is one of those tools. But you have to be willing to learn how to use it.
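     A minimal stand-in for the kind of upload script mentioned above (cloudplow does this properly). The local path, remote name, and 15-minute threshold are all hypothetical; --min-age makes rclone move only files that have sat in the local union write folder for a while. Printed instead of executed so the sketch is self-contained (drop the echo to run it for real, e.g. from a cron job):

     ```shell
     # Hypothetical local union write folder and remote:
     LOCAL=/mnt/user/local/gdrive_upload
     REMOTE=gdrive:backup
     # Upload anything that has been sitting locally for 15+ minutes:
     echo "rclone move $LOCAL $REMOTE --min-age 15m -v"
     ```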
  12. You are writing data to your Google Drive account all wrong... that's why! rclone mounts are HORRIBLE to write to and really should ONLY be used to read data from. Instead, if you MUST write to your Google Drive, use rclone copy/sync commands via the terminal shell.
  13. I have a complete backup of libvirt, xml, iso, and vdisk. In Win10, I set up a user password I don't remember, so I can't get in. But I have this backup I can restore from; the backup is from when I had NO PASSWORD. I bring in my backup vdisk1.img and boot up, figuring that's all I'd have to restore... and I'm hit with the password screen. Still can't get in. This makes no sense! Any ideas?

     Edit: I also have a backup of the nvram file.

     UPDATE: Solved. Explanation below. I've got the Win10 VM on an unassigned-device SSD, so the vdisk1.img for it is on there... AND... it is also located in a share (/mnt/user/domains/Win10). I originally restored my backup to the user share location, and that didn't work; I still hit the password screen in Win10. Then I tried restoring to the unassigned-device SSD (in addition to the user share location). That DID work.
  14. Thought I'd just mention: zsh is in the unraid NerdPack plugin; that is probably the easiest way to install zsh and have it persist through reboots. If you haven't used the NerdPack plugin, I'd suggest looking into it. It makes installing various tools and having them persist extremely easy. Edit: Apologies... you're referring to Oh-My-Zsh.
  15. Posting to promote/upvote a feature request: it would be nice to have Appdata Backup separate the backup into individual tars for each app or primary folder, instead of one gigantic tar.
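     The request above can be sketched in a few lines of shell: one tar per top-level folder instead of a single archive. The paths here build a throwaway demo tree (with hypothetical app names) so the script is self-contained; on Unraid, SRC would be something like /mnt/user/appdata:

     ```shell
     # Demo tree in temp dirs; swap for /mnt/user/appdata and a backup share in real use.
     SRC=$(mktemp -d); DEST=$(mktemp -d)
     mkdir -p "$SRC/plex" "$SRC/sonarr"
     touch "$SRC/plex/db" "$SRC/sonarr/config.xml"
     # One compressed tar per top-level app folder:
     for d in "$SRC"/*/; do
         name=$(basename "$d")
         tar -czf "$DEST/$name.tar.gz" -C "$SRC" "$name"
     done
     ls "$DEST"
     ```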