rragu

Members
  • Content Count: 49
  • Joined

Community Reputation: 0 Neutral

About rragu
  • Rank: Newbie


  1. For anyone stumbling across this post with the same issue: the problem *appears* to have been solved by simply updating my BIOS. From some cursory Googling, the issue seemed to be related to overclocking my memory (to be clear, I was only running it at the XMP-rated 3600MHz). I had already run MemTest on the memory sticks, which brought up no errors. So I simply updated my BIOS to see if that would solve the problem, and I haven't had any Machine Check Event warnings since.
  2. First off, thank you @Sycotix for your Authelia CA container as well as your video series on YouTube. Very helpful and detailed! I've set up Authelia using a combination of your video and this blog post by Linuxserver. I mostly followed your video except for the end where I used SWAG instead of NPM. I've tested Authelia by protecting two endpoints: Syncthing and Tautulli. A few questions: 1) When I go to https://syncthing.mydomain.com, I get a distorted Authelia login page (please see attached images), whereas when I go to https://tautulli.mydomain.com,
  3. So, I was in the process of setting up my unRAID server when I got a notification regarding Machine Check Events. I've attached the diagnostics. The relevant part of the syslog appears to be: Can anyone please help me understand this output? My server's component details are in my signature. Thanks! server-diagnostics-20210604-2332.zip
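     (Illustrative aside, not from the original post: on a Linux host such as unRAID, the machine-check lines can usually be pulled out of the kernel log or the persisted syslog with something like the commands below; the log path is the stock unRAID location and may differ on other setups.)

         # Hypothetical example: search the kernel ring buffer and syslog for MCE lines
         dmesg | grep -iE "mce|machine check"
         grep -i "machine check" /var/log/syslog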
  4. Sorry, just to make sure I'm understanding you right: I don't need to do anything to the 1080 Ti primary GPU other than bind it to VFIO via System Devices? I don't need to specify the vBIOS in the W10 VM's config/XML etc.?
  5. Thanks! Are there any possible issues that could occur as a result of stubbing the primary GPU (just wondering if there is something to look out for)?
  6. Hi, not entirely sure if this is the right place to post this but here goes:
     My setup:
     - CPU: R9 3900X
     - Motherboard: Asus Crosshair VIII Hero
     - PCIe x16 Top Slot: GTX 1080 Ti
     - PCIe x16 Second Slot: Quadro P2000
     - PCIe x16 Third Slot: LSI 9207-8i
     - Running unRAID 6.9.2
     What I want to accomplish:
     - Pass through the primary GPU (1080 Ti) for a W10 VM for gaming
     - Use the secondary GPU (P2000) for Plex/Emby hardware transcoding
     From what I understand, I need to: 1) Dump the vBIOS (following SpaceInvaderOne's
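     (Illustrative aside, not from the original post: before binding the 1080 Ti to VFIO, it's common to confirm it sits in its own IOMMU group. A minimal sketch of the usual check, assuming a standard sysfs layout:)

         #!/bin/bash
         # List every IOMMU group and the PCI devices in it; the GPU and its
         # HDMI audio function should ideally be alone in their group.
         for group in /sys/kernel/iommu_groups/*; do
             echo "IOMMU group ${group##*/}:"
             for dev in "$group"/devices/*; do
                 lspci -nns "${dev##*/}"
             done
         done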
  7. Thanks! I lowered the checkers to 2 and transfers to 1. Combined with a chunk size of 256M, I get the same ~80MB/s with half the CPU utilization as before, even without --ignore-checksum.
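     (Illustrative aside: roughly what the tuned transfer described above might look like as a one-off command; "gdrive:" and the paths are placeholders, not the poster's actual remote or directories.)

         rclone copy /mnt/user/archive gdrive:backup/archive \
             --transfers 1 \
             --checkers 2 \
             --drive-chunk-size 256M \
             --progress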
  8. Just tried out "rclone copy"... the difference is night and day.
     Test files: 4 files (total of 12.3 GB; between 2.3-3.6 GB each)
     Average transfer speed using rclone mount: 19.4 MB/s
     Average transfer speed using "rclone copy": 60.9 MB/s
     Average transfer speed using "rclone copy" and chunk size 256M: 78.1 MB/s
     The only drawback is heightened CPU/RAM usage, but I'm sure I can manage that with a script like you mentioned. Thanks very much for all your help!
  9. Thanks! I'll look into the resources you posted. As for not writing to the rclone Google Drive mount, (1) it's a slightly more widely known tip now 😅, (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive in the way I've been doing (e.g. data loss/corruption) or is it just lower performance?
  10. Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20MBps despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds. But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help. Also, how does this affect the items I have already uploaded (if it affects them at all)?
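      (Illustrative aside: for a Google Drive remote, the upload chunk size is normally controlled with the --drive-chunk-size flag added to the mount or copy command in the script; the remote name and mount point below are placeholders. As far as I know the setting only applies at upload time, so files already in Drive are unaffected.)

          rclone mount gdrive: /mnt/disks/gdrive \
              --allow-other \
              --drive-chunk-size 128M &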
  11. Well, it'll be a seven-hour drive. Personally, I'm willing to completely waste an hour of my time to gain that bit of peace of mind (even if it might be illusory 🤷‍♂️). Besides, what with quarantining, each hour of my time is suddenly much less valuable... As for the heatsink, I use an AIO (probably also overkill for this use case, but I had it left over from another build). I'm thinking an AIO shouldn't need to be removed, as it's not a hunk of metal like an NH-D15, etc.?
  12. I'm planning to move my server from my parents' house to mine. So far I'm planning on:
      - running a backup via Duplicacy and the Backup/Restore Appdata plugin (I already do this daily and weekly, respectively)
      - running a parity check before the move
      - noting which HDD is connected to which SATA port
      - removing the HDDs and expansion cards and packing them safely for the drive
      - reinstalling the components post-move in the same manner they were pre-move
      - running another parity check to ensure there was no damage to the HDDs as a result of the drive
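      (Illustrative aside for the "noting which HDD is on which port" step: a quick snapshot of drive identifiers before tear-down makes re-matching serials easier afterwards; the output path is just an example location on the unRAID flash drive.)

          lsblk -o NAME,MODEL,SERIAL,SIZE > /boot/drive-map-$(date +%Y%m%d).txt
          ls -l /dev/disk/by-id/ >> /boot/drive-map-$(date +%Y%m%d).txt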
  13. When you say "well-written", do you mean on the part of the container creator or the underlying service? For example, I generally prefer to use the Linuxserver variant of a given container. Presumably, those would count as well-written? Also, with regards to data loss, I imagine that depends on whether data is actively being written (e.g. my Nextcloud and Bookstack containers are usually NOT writing data whereas my telegraf container is constantly writing to InfluxDB)? In any case, since I'm certainly not knowledgeable enough to know if an app is well written, I suppose
  14. I have a few instances where Docker containers work in combination (e.g. Nextcloud and Bookstack each work with mariadb; Telegraf, InfluxDB, and Grafana all work together, etc.). When there is an update for one or more of these containers, is there a recommended way to update (e.g. stop everything, update everything, then restart in a specific order)? Or can I just update any individual container as and when I please, without stopping any other "upstream" or "downstream" containers?
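      (Illustrative aside, with example container names: the conservative approach is to stop the app before the database it depends on, apply the updates, then start the database first and the app second. On unRAID the update itself is normally applied from the Docker tab, which recreates the container from the new image.)

          docker stop nextcloud && docker stop mariadb
          # ...apply the container updates here (e.g. via the unRAID Docker tab)...
          docker start mariadb && docker start nextcloud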
  15. My standard disclaimer: I only know enough to break things that I don't know how to fix... I've written my go file such that at boot, I get my array passphrase via AWS Secrets Manager and write it to /root/keyfile. unRAID then uses /root/keyfile to unlock/start up my array. I've been manually deleting the keyfile after startup. The aws-cli command I use for this retrieves a string, not a file. So, is it possible to use the output of that command as the passphrase directly, rather than writing it to a keyfile first? Thanks!
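      (Illustrative aside: a rough sketch of the go-file step described above; "array-passphrase" is a placeholder secret ID, not the poster's actual secret. The keyfile could also be removed automatically once the array is up, rather than by hand.)

          aws secretsmanager get-secret-value \
              --secret-id array-passphrase \
              --query SecretString \
              --output text > /root/keyfile
          chmod 600 /root/keyfile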