Everything posted by rragu

  1. Thanks! I lowered the checkers to 2 and transfers to 1. Combined with a chunk size of 256M, I get the same ~80MB/s with half the CPU utilization as before, even without --ignore-checksum
  2. Just tried out "rclone copy"... the difference is night and day. Test files: 4 files (12.3 GB total; 2.3-3.6 GB each). Average transfer speed using rclone mount: 19.4 MB/s. Average transfer speed using "rclone copy": 60.9 MB/s. Average transfer speed using "rclone copy" with a chunk size of 256M: 78.1 MB/s. The only drawback is heightened CPU/RAM usage, but I'm sure I can manage that with a script like you mentioned. Thanks very much for all your help!
  3. Thanks! I'll look into the resources you posted. As for not writing to the rclone Google Drive mount, (1) it's a slightly more widely known tip now 😅, (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive in the way I've been doing (e.g. data loss/corruption) or is it just lower performance?
  4. Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20 MB/s despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds. But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help. Also, how does this affect the items I have already uploaded (if it affects them at all)?
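Based on the replies above (newer posts appear first here), the eventual fix was to transfer with a direct `rclone copy` instead of writing through the mount. A sketch of that command, where the remote name (`gdrive:`) and both paths are placeholders for the actual setup:

```shell
# Copy straight to the Google Drive backend rather than through the FUSE mount.
# --drive-chunk-size raises the per-file upload chunk (costs more RAM per transfer);
# lowering --transfers and --checkers keeps CPU utilization down.
rclone copy /mnt/user/archive gdrive:backup/archive \
  --drive-chunk-size 256M \
  --transfers 1 \
  --checkers 2 \
  --progress
```

Existing uploads are unaffected by the chunk-size change; it only applies to how new uploads are split during transfer.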
  5. Well it'll be a seven hour drive. Personally, I'm willing to completely waste an hour of my time to gain that bit of peace of mind (even if it might be illusory 🤷‍♂️). Besides, what with quarantining, each hour of my time is suddenly much less valuable... As for the heatsink, I use an AIO (probably also overkill for this use-case; but I had it left over from another build). I'm thinking that an AIO shouldn't need to be removed, as it's not a hunk of metal like a NH-D15 etc.?
  6. I'm planning to move my server from my parents' house to mine. So far I'm planning on:
     - running a backup via Duplicacy and the Backup/Restore Appdata plugin (I already do this daily and weekly respectively)
     - running a parity check before the move
     - noting which HDD is connected to which SATA port
     - removing the HDDs and expansion cards and packing them safely for the drive
     - reinstalling the components post-move in the same manner they were pre-move
     - running another parity check to ensure there was no damage to the HDDs as a result of the drive
  7. When you say "well-written", do you mean on the part of the container creator or the underlying service? For example, I generally prefer to use the Linuxserver variant of a given container. Presumably, those would count as well-written? Also, with regards to data loss, I imagine that depends on whether data is actively being written (e.g. my Nextcloud and Bookstack containers are usually NOT writing data whereas my telegraf container is constantly writing to InfluxDB)? In any case, since I'm certainly not knowledgeable enough to know if an app is well written, I suppose
  8. I have a few instances where Docker containers work in combination (e.g. Nextcloud and Bookstack each work with mariadb; Telegraf, InfluxDB and Grafana all work together, etc.). When there is an update for one or more of these containers, is there a recommended way to update (e.g. stop everything, update everything, then restart in a specific order)? Or can I just update any individual container as and when I please without stopping any other "upstream" or "downstream" containers?
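One conservative ordering for linked containers, sketched here with the container names from the post (which may differ from the actual names in a given setup): stop consumers before their backing service, update, then start the backing service before its consumers.

```shell
# Stop the consumers first so nothing writes to the database mid-update...
docker stop grafana telegraf
docker stop influxdb

# ...update images here (via the unRAID UI, or e.g. `docker pull influxdb`)...

# ...then bring the backing service up before anything that depends on it.
docker start influxdb
docker start telegraf grafana
```

In practice, well-behaved clients like Telegraf and Grafana retry their connections, so updating a single container in place usually just causes a brief gap in metrics rather than any breakage.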
  9. My standard disclaimer: I only know enough to break things that I don't know how to fix... I've written my go file such that at boot, I get my array passphrase via AWS Secrets Manager and write it to /root/keyfile. unRAID then uses /root/keyfile to unlock/startup my array. I've been manually deleting my keyfile after startup. The aws-cli command I use for the procedure above retrieves a string, not a file. So, is it possible to use the output of this command as the passphrase rather than writing it to a keyfile first? Thanks!
  10. Wasn't really sure which sub-forum to post this in, but here goes: I set up the trio of Telegraf+InfluxDB+Grafana recently and noticed the following curious behavior from my GPU: I installed my GPU almost a year ago. At idle, GPU usage was obviously 0% with fan speed ~50%. The GPU Statistics plugin confirmed as much. However, since I set up Telegraf et al. a week or so ago, "idle" GPU usage hovers between 2-5% with fan speed at ~65% for extended periods. Upon checking nvidia-smi, it reports "no running processes". The issue also appears to go away on its o
  11. My standard disclaimer: I only know enough to break things that I don't know how to fix... Question 1: I've written my go file such that at boot, I get my array passphrase via AWS Secrets Manager and write it to /root/keyfile. unRAID then uses /root/keyfile to unlock/start up my array. I've been manually deleting my keyfile after startup. Can I just add the following to the go file to automatically delete the keyfile 5 minutes after startup:
      sleep 300s
      shred /root/keyfile
      Or should I just write a user script with the above commands via the User Scripts plugin
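A minimal sketch of the go-file fragment being asked about, assuming the array reliably finishes starting within five minutes. Backgrounding the delay matters: run inline, the `sleep` would stall the rest of boot. The `-u` flag makes shred unlink the file after overwriting it, which the bare `shred` in the post would not do.

```shell
# /boot/config/go (fragment): scrub the keyfile five minutes after boot.
# Run in a backgrounded subshell so the go file itself is not blocked.
(
  sleep 300
  shred -u /root/keyfile
) &
```

A User Scripts job scheduled "At First Array Start Only" with the same two commands would work equally well and is easier to adjust later.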
  12. Thanks! EDIT: Solved both issues as described in my post above.
  13. @saarg Thanks! uBlock Origin was the culprit. Apparently, it's not a fan of duckdns.org? I had planned to switch from duckdns to cloudflare-ddns anyway. After doing so, the site is working properly in Firefox with uBlock Origin still enabled.
  14. I recently set up Telegraf+InfluxDB+Grafana (+HDDTemp). In order to get Nvidia GPU stats, I changed my telegraf repository from alpine to latest. Life was good. However, I've just noticed that, in the Docker tab of unRAID, the Telegraf icon is missing and the docker container name is no longer a link to edit the template. If I click the docker icon, I only have the options of Console, Start, Stop, Pause, Restart, and Remove. My Grafana dashboard is still populating properly. So, Telegraf still appears to be doing its job. That said, can anyone help me figure
  15. I installed LSIO's code-server container and set up a reverse proxy using the LSIO LetsEncrypt container. My issue is: on Firefox, once I go to code.mydomain.com and log in with the password I set in the code-server container, I just get a blank page. However, the same site works perfectly fine on Chrome and Edge. I know I'm being kinda vague. But any ideas as to why this might be or what I should start checking for?
  16. Don't know if this is expected behavior: I just updated Docker Folder to v2020.05.03 (not that I was having any issues with the previous version); when I then switched to the Docker tab, all my folders were gone and I was back to the original flat list of containers. Edited to add: While I was able to update containers, I could not stop them (i.e. I clicked the container icon, clicked Stop, and nothing happened). After uninstalling Docker Folder, I am able to start/stop containers again.
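The manual workaround from the last post above can be sketched as follows, assuming the default LSIO appdata layout (`/mnt/user/appdata/letsencrypt/...`) and a container named `letsencrypt`; adjust both to the actual setup. The filename comes from the linked reverse-proxy-confs repository.

```shell
# Fetch the sample conf, activate it by dropping the .sample suffix,
# then restart the proxy container so nginx reloads with the new site.
cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
wget https://raw.githubusercontent.com/linuxserver/reverse-proxy-confs/master/calibre-web.subdomain.conf.sample
mv calibre-web.subdomain.conf.sample calibre-web.subdomain.conf
docker restart letsencrypt
```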
  17. Hi, I installed recordings-converter and tested it out on a couple of files. Conversion logs are attached. A few questions: 1) I think I've set it up correctly to use my GPU. Is the attached configuration correct? 2) While the docker was doing its work, I did see load on the GPU and an ffmpeg process. But there was still 5-10% load on the CPU (and no other docker should have been using that much CPU resources at the time). Is this expected behavior? 3) Does the container work on multiple files sequentially or concurrently? So far I've tested the container only
  18. Aha. Thanks! Another request: is this something that could be integrated into NerdPack?
  19. Could you please add back jq (assuming there are no problems with it on 6.8, as it seems it actually was part of NerdPack a while ago)? Thanks
  20. Hi, I believe that starting with 6.8, unRAID no longer saves a passphrase to a keyfile. So, does this mean the only way to autostart an encrypted array is to use a keyfile? Assuming there is some way to autostart using a passphrase: So I have my server at my parents' place since they have Gigabit internet and I don't. I generally use the OpenVPN docker to administer the server, although I do have a Raspberry Pi on their LAN that I connect to via VNC if OpenVPN isn't working properly. I was planning on storing the passphrase on the Raspberry Pi and having the server
  21. Ahh I see. That makes more sense. Yeah I think I'll just set Media to be cache-enabled. I had worried about unnecessary writes on the cache disk. But, I believe SSD drive endurance is high enough now that I'm more likely to replace the drive due to obsolescence before failure due to endurance
  22. Syncthing is set to cache: Yes Media is set to cache: No According to the tooltip in the Share Settings page, "No prohibits new files and subdirectories from being written onto the Cache disk/pool." So if (1) content is on the Syncthing share and the cache drive and (2) I then move it to the Media share using Krusader, my expectation is that content moves from Syncthing share on the cache drive to the Media share on the array (bypassing the cache drive since Media is NOT cache enabled). But that doesn't seem to happen. When I check the cache drive contents, th
  23. Running on 6.8.1 I have an unRAID server at my parents' house, mainly because they have gigabit FiOS and I don't (for the same price as my 200Mbps, I might add; but that's a rant for another day). I live in another city and I usually use Syncthing to add any content from my personal desktop to the unRAID server. I then use Krusader to move content from the Syncthing share to whatever share it needs to go into. I have noticed though that when I transfer content from the Syncthing share (which is cache-enabled) to, for example, my Media share (which is NOT c
  24. I just checked the LSIO GitHub reverse-proxy-conf repository (https://github.com/linuxserver/reverse-proxy-confs) and actually there is a 'calibre-web.subdomain.conf.sample' file here. 1) I assume I can just copy that file over into my LE container manually and continue as usual? 2) Maybe something to look into regarding why this file alone doesn't seem to get retrieved by the LE container upon restart (assuming this issue isn't specific to me)?