Everything posted by rragu

  1. Thanks! I lowered the checkers to 2 and the transfers to 1. Combined with a chunk size of 256M, I get the same ~80 MB/s with half the CPU utilization as before, even without --ignore-checksum.
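     For reference, the command I've ended up with looks roughly like the sketch below (the local path and the remote name "gdrive" are just placeholders for whatever your own setup uses):

        rclone copy /mnt/user/Backups gdrive:Backups \
          --transfers 1 \
          --checkers 2 \
          --drive-chunk-size 256M \
          --progress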
  2. Just tried out "rclone copy"... the difference is night and day.
     Test files: 4 files (12.3 GB total; between 2.3 and 3.6 GB each)
     Average transfer speed using rclone mount: 19.4 MB/s
     Average transfer speed using "rclone copy": 60.9 MB/s
     Average transfer speed using "rclone copy" with a chunk size of 256M: 78.1 MB/s
     The only drawback is heightened CPU/RAM usage, but I'm sure I can manage that with a script like you mentioned. Thanks very much for all your help!
  3. Thanks! I'll look into the resources you posted. As for not writing to the rclone Google Drive mount: (1) it's a slightly more widely known tip now 😅, and (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive the way I've been doing it (e.g. data loss/corruption), or is it just lower performance?
  4. Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20 MB/s despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds. But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help. Also, how does this affect the items I have already uploaded (if it affects them at all)?
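     Is --drive-chunk-size the flag I'm after? I.e., would it slot into the mount command roughly like this (a simplified sketch; the remote name and mount point are placeholders rather than my actual attached script)?

        rclone mount gdrive: /mnt/disks/gdrive \
          --allow-other \
          --drive-chunk-size 256M \
          --vfs-cache-mode writes &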
  5. Well, it'll be a seven-hour drive. Personally, I'm willing to completely waste an hour of my time to gain that bit of peace of mind (even if it might be illusory 🤷‍♂️). Besides, what with quarantining, each hour of my time is suddenly much less valuable... As for the heatsink, I use an AIO (probably also overkill for this use case, but I had it left over from another build). I'm thinking that an AIO shouldn't need to be removed, as it's not a hunk of metal like an NH-D15, etc.?
  6. I'm planning to move my server from my parents' house to mine. So far I'm planning on:
     - running a backup via Duplicacy and the Backup/Restore Appdata plugin (I already do this daily and weekly, respectively)
     - running a parity check before the move
     - noting which HDD is connected to which SATA port
     - removing the HDDs and expansion cards and packing them safely for the drive
     - reinstalling the components post-move in the same manner they were pre-move
     - running another parity check to ensure there was no damage to the HDDs as a result of the drive
     A few questions:
     1) Is there anything else I should be considering?
     2) Currently, my server has a DHCP reservation of 192.168.x.y; the DHCP reservations at my house follow a slightly different scheme. Apart from simply creating a new reservation for the server on my router, is there anywhere within unRAID I need to manually update?
     3) I run a number of reverse-proxied services on unRAID. Since I run cloudflare-ddns, I take it Cloudflare will automatically be updated with the new public IP (i.e. I don't need to do anything or reinstall LetsEncrypt etc.)?
     Thanks for any help/advice!
  7. When you say "well-written", do you mean on the part of the container creator or the underlying service? For example, I generally prefer to use the Linuxserver variant of a given container. Presumably, those would count as well-written? Also, with regard to data loss, I imagine that depends on whether data is actively being written (e.g. my Nextcloud and Bookstack containers are usually NOT writing data, whereas my telegraf container is constantly writing to InfluxDB)? In any case, since I'm certainly not knowledgeable enough to know whether an app is well-written, I suppose I'll stick with my existing protocol of stopping all affected containers before updating. Small price to pay for peace of mind...
  8. I have a few instances where Docker containers work in combination (e.g. Nextcloud and Bookstack each work with mariadb; Telegraf, InfluxDB, and Grafana all work together; etc.). When there is an update for one or more of these Docker containers, is there a recommended way to update (e.g. stop everything, update everything, then restart in a specific order)? Or can I just update any individual container as and when I please, without stopping any other "upstream" or "downstream" containers?
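     To make the question concrete, is this the sort of ordering you'd recommend? (A sketch using the docker CLI purely for illustration; the container names are from my setup, and in practice the update itself would happen through the unRAID Docker tab.)

        # stop the apps that depend on the database first, then the database itself
        docker stop nextcloud bookstack
        docker stop mariadb
        # ...update the containers (normally via the unRAID web UI)...
        # bring the database back up before its dependents
        docker start mariadb
        docker start nextcloud bookstack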
  9. My standard disclaimer: I only know enough to break things that I don't know how to fix... I've written my go file such that, at boot, I get my array passphrase via AWS Secrets Manager and write it to /root/keyfile. unRAID then uses /root/keyfile to unlock and start the array. I've been manually deleting my keyfile after startup. The aws-cli command I use for the procedure above retrieves a string, not a file. So, is it possible to use the output of this command as the passphrase rather than writing it to a keyfile first? Thanks!
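     For context, the retrieval step in my go file is roughly the following (the secret name here is a placeholder, not my real one):

        # fetch the passphrase string from Secrets Manager and write it to the keyfile
        aws secretsmanager get-secret-value \
          --secret-id unraid-array-passphrase \
          --query SecretString \
          --output text > /root/keyfile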
  10. Wasn't really sure which sub-forum to post this in, but here goes: I set up the trio of Telegraf+InfluxDB+Grafana recently and noticed the following curious behavior from my GPU:
      I installed my GPU almost a year ago. At idle, the GPU usage was obviously 0%, with fan usage ~50%. The GPU Statistics plugin confirmed as much. However, since I set up Telegraf et al. a week or so ago, "idle" GPU usage hovers between 2 and 5%, with fan usage at ~65% for extended periods. Upon checking nvidia-smi, it reports "no running processes". The issue also appears to go away on its own.
      Any idea what could be causing this behavior? Admittedly, it doesn't seem to be affecting transcoding results in any way. But it's just weird that I'm seeing sustained, albeit low, GPU and fan utilization despite "no running processes". I didn't notice this behavior before setting up Telegraf etc. (i.e. when I only had the GPU Statistics plugin installed). That said, I'm pretty sure all Telegraf did was alert me to an existing issue.
  11. My standard disclaimer: I only know enough to break things that I don't know how to fix...
      Question 1: I've written my go file such that, at boot, I get my array passphrase via AWS Secrets Manager and write it to /root/keyfile. unRAID then uses /root/keyfile to unlock and start the array. I've been manually deleting my keyfile after startup. Can I just add the following to the go file to automatically delete the keyfile 5 minutes after startup:
      sleep 300s
      shred /root/keyfile
      Or should I just write a user script with the above commands via the User Scripts plugin, to be executed after array start?
      Question 2: From what I've managed to glean from the forums, in unRAID 6.8+ passphrases seem to be more secure than keyfiles, as passphrases are not written to a visible-to-user file (even one that only exists in RAM). The aws-cli command I use for the procedure above retrieves a string, not a file. So, is it possible to use the output of this command as the passphrase rather than writing it to a file first?
      Thanks!
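      If the user script route is the better one, I'm picturing something as simple as this (the -u flag is my assumption, so that shred also removes the file after overwriting it):

         #!/bin/bash
         # give the array a few minutes to come up, then overwrite and remove the keyfile
         sleep 300
         shred -u /root/keyfile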
  12. Thanks! EDIT: Solved both issues as described in my post above.
  13. @saarg Thanks! uBlock Origin was the culprit. Apparently, it's not a fan of duckdns.org? I had planned to switch from duckdns to cloudflare-ddns anyway. After doing so, the site is working properly in Firefox with uBlock Origin still enabled.
  14. I recently set up Telegraf+InfluxDB+Grafana (+HDDTemp). In order to get Nvidia GPU stats, I changed my telegraf repository from alpine to latest. Life was good. However, I've just noticed that, in the Docker tab of unRAID, the Telegraf icon is missing and the docker container name is no longer a link to edit the template. If I click the docker icon, I only have the options of Console, Start, Stop, Pause, Restart, and Remove. My Grafana dashboard is still populating properly, so Telegraf still appears to be doing its job. That said, can anyone help me figure out what I broke? Or is this just the price of switching the repository (I would switch back to confirm this, but as stated, I can't edit the template)?
      EDIT: Found the problem... somehow the my-telegraf.xml file at /boot/config/plugins/dockerMan/templates-user was deleted. Not a clue how that happened, as I don't make a point of rooting around in /boot unnecessarily. In any case, thanks to the CA Backup/Restore Appdata plugin, I copied the file back to /boot. A refresh of the Docker tab shows I'm back to normal (icon and all).
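      For anyone who hits the same thing, the fix really was just a single copy back to the flash drive, along these lines (the source path depends entirely on where your CA Backup/Restore Appdata settings put the flash backup; this one is only an example):

         cp /mnt/user/Backups/flash/config/plugins/dockerMan/templates-user/my-telegraf.xml \
            /boot/config/plugins/dockerMan/templates-user/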
  15. I installed LSIO's code-server container and set up a reverse proxy using the LSIO LetsEncrypt container. My issue is: on Firefox, once I go to code.mydomain.com and log in with the password I set in the code-server container, I just get a blank page. However, the same site works perfectly fine on Chrome and Edge. I know I'm being kinda vague. But any ideas as to why this might be or what I should start checking for?
  16. Don't know if this is expected behavior: I just updated Docker Folder to v2020.05.03 (not that I was having any issues with the previous version); I then switched to the Docker tab to find that all my folders had been deleted. I'm now back to the original list of containers. Edited to add: While I was able to update containers, I could not stop them (i.e. I clicked the container icon, clicked Stop, and nothing happened). After uninstalling Docker Folder, I am able to start/stop containers again.
  17. Hi, I installed recordings-converter and tested it out on a couple of files. Conversion logs are attached. A few questions:
      1) I think I've set it up correctly to use my GPU. Is the attached configuration correct?
      2) While the docker was doing its work, I did see load on the GPU and an ffmpeg process. But there was still 5-10% load on the CPU (and no other docker should have been using that much CPU at the time). Is this expected behavior?
      3) Does the container work on multiple files sequentially or concurrently? So far, I've tested the container only on folders with one recording in them. When I switch to my entire recordings folder, I just want to make sure my other containers/VMs etc. aren't affected.
      4) I am using my GPU for hardware transcoding in Plex and Emby as well. It is my understanding that as long as my GPU isn't passed through to a VM, multiple Dockers can use the GPU concurrently. Is my understanding correct?
      Thanks for your help, and a bigger thanks for all your work on these containers. Definitely makes my life easier!
      postProcess.18-03-2020-0712.log postProcess.18-03-2020-0839.log
  18. Aha. Thanks! Another request: is this something that could be integrated into NerdPack?
  19. Could you please add back jq (assuming there are no problems with it on 6.8, as it seems it actually was part of NerdPack a while ago)? Thanks
  20. Hi, I believe that starting with 6.8, unRAID no longer saves a passphrase to a keyfile. So, does this mean the only way to autostart an encrypted array is to use a keyfile?
      Assuming there is some way to autostart using a passphrase: I have my server at my parents' place, since they have gigabit internet and I don't. I generally use the OpenVPN docker to administer the server, although I do have a Raspberry Pi on their LAN that I connect to via VNC if OpenVPN isn't working properly. I was planning on storing the passphrase on the Raspberry Pi and having the server retrieve it via SMB at start. Is that advisable, or should I configure it some other way? Any security concerns/issues to navigate?
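      To be concrete, what I'm picturing in the go file is something along these lines (the share name, mount point, IP, and credentials are placeholders, and I'd want to keep the real credentials out of the go file somehow):

         # mount the Pi's share read-only, grab the keyfile, then unmount
         mkdir -p /tmp/keys
         mount -t cifs //192.168.x.z/keys /tmp/keys -o username=pi,password=CHANGEME,ro
         cp /tmp/keys/keyfile /root/keyfile
         umount /tmp/keys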
  21. Ahh, I see. That makes more sense. Yeah, I think I'll just set Media to be cache-enabled. I had worried about unnecessary writes on the cache disk, but I believe SSD endurance is high enough now that I'm more likely to replace the drive due to obsolescence than due to endurance failure.
  22. Syncthing is set to cache: Yes
      Media is set to cache: No
      According to the tooltip on the Share Settings page, "No prohibits new files and subdirectories from being written onto the Cache disk/pool." So if (1) content is on the Syncthing share and the cache drive, and (2) I then move it to the Media share using Krusader, my expectation is that the content moves from the Syncthing share on the cache drive to the Media share on the array (bypassing the cache drive, since Media is NOT cache-enabled). But that doesn't seem to happen. When I check the cache drive contents, there is a Media folder on the cache drive with the contents I just transferred.
      EDIT: I just saw @remotevisitor's explanation above.
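      (For anyone who finds this later: as I now understand it, the workaround is to do the move at the disk level rather than through /mnt/user, e.g. from the cache's copy of Syncthing straight into the array-only view of Media. A sketch, assuming the stock /mnt/cache and /mnt/user0 mounts and an example filename:)

         # move from the cache copy of the Syncthing share to the array-only view of Media,
         # so the file doesn't stay parked on the cache drive
         mv /mnt/cache/Syncthing/SomeFile.mkv /mnt/user0/Media/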
  23. Running on 6.8.1.
      I have an unRAID server at my parents' house, mainly because they have gigabit FiOS and I don't (for the same price as my 200Mbps, I might add; but that's a rant for another day). I live in another city, and I usually use Syncthing to add any content from my personal desktop to the unRAID server. I then use Krusader to move content from the Syncthing share to whatever share it needs to go into.
      I have noticed, though, that when I transfer content from the Syncthing share (which is cache-enabled) to, for example, my Media share (which is NOT cache-enabled), the content nevertheless remains on the cache drive. And since the Media share is not cache-enabled, the content never moves to the array, even when Mover is invoked in the early morning. Now, I suppose that I could wait for Mover to move the content from cache to array within the Syncthing share first and then move it to Media. But that could take up to a day, and I'm impatient.
      So my question is: is this expected behavior, or do I have some incorrect settings in place?
  24. I just checked the LSIO reverse-proxy-confs repository on GitHub (https://github.com/linuxserver/reverse-proxy-confs), and there is in fact a 'calibre-web.subdomain.conf.sample' file there.
      1) I assume I can just copy that file over into my LE container manually and continue as usual?
      2) Maybe something to look into regarding why this file alone doesn't seem to get retrieved by the LE container upon restart (assuming this issue isn't specific to me)?
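      For question 1, what I have in mind concretely is something like the following (the appdata path is my assumption of where the LE container keeps its proxy confs; adjust to match your own setup):

         # pull the sample straight from the repo into the proxy-confs folder,
         # then copy it to .conf so nginx picks it up on the next container restart
         cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
         wget https://raw.githubusercontent.com/linuxserver/reverse-proxy-confs/master/calibre-web.subdomain.conf.sample
         cp calibre-web.subdomain.conf.sample calibre-web.subdomain.conf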