rragu
Members · 52 posts
  1. EDIT: SOLVED. Looks like my vBIOS was the issue! Despite following SpaceInvaderOne's video/script to dump the vBIOS from my card, that vBIOS doesn't appear to work. It was only after I used one of the compatible vBIOSes from TechPowerUp that the card showed up properly in the VM's Device Manager. I also undid the ACS override and stuck with i440fx as the Machine Type.
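For reference, the fix described above corresponds to pointing the VM's libvirt XML at the downloaded ROM file. A minimal sketch of the relevant <hostdev> fragment, with placeholder PCI addresses and a hypothetical ROM path (your card's bus/slot and the filename will differ):

```xml
<!-- GPU passthrough entry in the VM's XML; addresses and path are examples only -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host-side address of the GPU (check Tools > System Devices) -->
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <!-- vBIOS ROM downloaded from TechPowerUp -->
  <rom file='/mnt/user/isos/vbios/gtx1080ti.rom'/>
</hostdev>
```

This is a config fragment rather than a complete VM definition; Unraid's VM editor generates the rest of the XML around it.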
  2. As with DemoRic above, I get the following error: "Fatal error: Cannot redeclare _() (previously declared in /usr/local/emhttp/plugins/parity.check.tuning/Legacy.php:6) in /usr/local/emhttp/plugins/dynamix/include/Translations.php on line 19" although I get it once the Array is started and I'm logged into the dashboard (not only when the array is stopping). I don't think I got it before installing v2021.09.10.
  3. I've been trying and failing to get my graphics card passed through to a Windows 10 VM for a few hours now, no matter what I try. I'm going to need some help to go any further. I've been largely following this guide on setting up remote gaming. Details/Settings:
     - GTX 1080 Ti is in the motherboard's top slot (so I guess that makes it the primary GPU?)
     - ACS override: set to Both
     - VFIO: both graphics and sound devices stubbed via Tools > System Devices
     - Boot: Legacy boot
     After adding the graphics card with the settings above and booting the VM, Device Manager doesn't recognize any nVidia card as being installed. All I see under Display Adapters in Device Manager is Microsoft Basic Display Adapter and Microsoft Remote Display Adapter. If I go ahead and install the nVidia drivers anyway, I then get Code 43 (which I suppose isn't surprising if Windows doesn't recognize the GPU in the first place). Ideas on how to move forward from here would be very much appreciated. Thanks!
  4. For anyone stumbling across this post having the same issue, this problem *appears* to have been solved by simply updating my BIOS. From some cursory Googling, the problem appeared to be related to overclocking my memory (to be clear, I was only running it at the XMP-rated 3600MHz speed). I had already run a memTest on the memory sticks which brought up no errors. So, I simply updated my BIOS to see if that would solve the problem. I haven't had any Machine Check Events warnings since.
  5. First off, thank you @Sycotix for your Authelia CA container as well as your video series on YouTube. Very helpful and detailed! I've set up Authelia using a combination of your video and this blog post by Linuxserver. I mostly followed your video, except at the end I used SWAG instead of NPM. I've tested Authelia by protecting two endpoints: Syncthing and Tautulli. A few questions:
     1) When I go to https://syncthing.mydomain.com, I get a distorted Authelia login page (please see attached images), whereas when I go to https://tautulli.mydomain.com, I get the usual Authelia login page. This is the case on desktop Firefox, Chrome, and Edge. I don't suppose you've seen this before? Any ideas as to why this might be? The distorted page is still functional (just not as pretty). EDIT: tried mobile Chrome (iOS) and mobile Safari; for both mobile browsers, both Syncthing and Tautulli give me the distorted Authelia page.
     2) In any case, once I log in, I get another login prompt. Obviously this is from the authentication I enabled before Authelia was set up. So, now that Authelia is protecting these services, am I good to just disable the "internal" (for lack of a better word) authentication for these services?
     2a) I disabled the basic GUI auth for Syncthing. And while Authelia of course still protects Syncthing, I now get a bright red warning from Syncthing that I need to set GUI authentication. Is there any way to make Syncthing aware of Authelia, or link them in some way, so that the warning goes away?
     3) For the majority of my reverse-proxied services, I will probably be the only one who needs to access them. But for certain services (e.g. Ombi) where I would have multiple users, how do I set it up such that userX and userY logging in via Authelia are automatically signed in to the desired service as userX and userY, respectively? Thanks for any and all help!
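For anyone replicating the setup above on the SWAG side, enabling Authelia on a service is usually just a matter of activating the two snippet includes that ship with the linuxserver SWAG image. A sketch of a proxy conf, based on the stock SWAG sample layout (the server name, upstream app, and port are placeholders for whatever service you're protecting):

```nginx
server {
    listen 443 ssl;
    server_name tautulli.*;

    # redirects unauthenticated requests to the Authelia portal
    include /config/nginx/authelia-server.conf;

    location / {
        # enforce Authelia on this location
        include /config/nginx/authelia-location.conf;
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app tautulli;
        set $upstream_port 8181;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

The exact snippet filenames and paths come from the SWAG container's defaults; check your image's sample confs if yours differ.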
  6. So, I was in the process of setting up my unRAID server when I got a notification regarding Machine Check Events. I've attached the diagnostics. The relevant part of the syslog appears to be: Can anyone please help me understand this output? My server's component details are in my signature. Thanks! server-diagnostics-20210604-2332.zip
  7. Sorry, just to make sure I'm understanding you right: I don't need to do anything to the 1080 Ti primary GPU other than bind it to VFIO via System Devices? I don't need to specify the vBIOS in the W10 VM's config/XML etc.?
  8. Thanks! Are there any possible issues that could occur as a result of stubbing the primary GPU (just wondering if there is something to look out for)?
  9. Hi, not entirely sure if this is the right place to post this, but here goes. My setup:
     - CPU: R9 3900X
     - Motherboard: Asus Crosshair VIII Hero
     - PCIe x16 top slot: GTX 1080 Ti
     - PCIe x16 second slot: Quadro P2000
     - PCIe x16 third slot: LSI 9207-8i
     - Running unRAID 6.9.2
     What I want to accomplish:
     - Pass through the primary GPU (1080 Ti) to a W10 VM for gaming
     - Use the secondary GPU (P2000) for Plex/Emby hardware transcoding
     From what I understand, I need to: 1) dump the vBIOS (following SpaceInvaderOne's video) for the 1080 Ti, since it's an nVidia GPU in the primary slot, and 2) install the nVidia plugin to use the P2000 for hardware transcoding in Docker. My question is: apart from 1 and 2 above, is there anything special I need to do to accomplish my goals (e.g. stubbing the primary GPU or something like that)? N.B.: if switching the GPUs (i.e. putting the P2000 in the primary slot) would somehow make things easier, unfortunately I can't. My 1080 Ti is a 2.5-slot card and there isn't enough clearance between the second PCIe slot and the LSI HBA in the third slot.
  10. Thanks! I lowered the checkers to 2 and transfers to 1. Combined with a chunk size of 256M, I get the same ~80 MB/s with half the CPU utilization as before, even without --ignore-checksum.
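Putting the flags from this thread together, the tuned one-shot upload looks something like the following (the remote name and paths are placeholders; this assumes a Google Drive remote already configured via rclone config):

```shell
# Upload directly with rclone copy instead of writing through the mount.
# A larger --drive-chunk-size trades RAM per transfer for throughput;
# fewer transfers/checkers keeps CPU usage down.
rclone copy /mnt/user/backups gdrive:backups \
  --drive-chunk-size 256M \
  --transfers 1 \
  --checkers 2 \
  --progress
```

Note that --drive-chunk-size only affects uploads in flight; it has no effect on files already stored in the remote.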
  11. Just tried out "rclone copy", and the difference is night and day. Test files: 4 files (12.3 GB total; between 2.3 and 3.6 GB each)
      - Average transfer speed using rclone mount: 19.4 MB/s
      - Average transfer speed using "rclone copy": 60.9 MB/s
      - Average transfer speed using "rclone copy" with chunk size 256M: 78.1 MB/s
      The only drawback is heightened CPU/RAM usage, but I'm sure I can manage that with a script like you mentioned. Thanks very much for all your help!
  12. Thanks! I'll look into the resources you posted. As for not writing to the rclone Google Drive mount, (1) it's a slightly more widely known tip now 😅, (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive in the way I've been doing (e.g. data loss/corruption) or is it just lower performance?
  13. Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20MBps despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds. But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help. Also, how does this affect the items I have already uploaded (if it affects them at all)?
  14. Well it'll be a seven hour drive. Personally, I'm willing to completely waste an hour of my time to gain that bit of peace of mind (even if it might be illusory 🤷‍♂️). Besides, what with quarantining, each hour of my time is suddenly much less valuable... As for the heatsink, I use an AIO (probably also overkill for this use-case; but I had it left over from another build). I'm thinking that an AIO shouldn't need to be removed, as it's not a hunk of metal like a NH-D15 etc.?
  15. I'm planning to move my server from my parents' house to mine. So far I'm planning on:
      - running a backup via Duplicacy and the Backup/Restore Appdata plugin (I already do this daily and weekly, respectively)
      - running a parity check before the move
      - noting which HDD is connected to which SATA port
      - removing the HDDs and expansion cards and packing them safely for the drive
      - reinstalling the components post-move in the same manner they were pre-move
      - running another parity check to ensure there was no damage to the HDDs as a result of the drive
      A few questions:
      1) Is there anything else I should be considering?
      2) Currently, my server has a DHCP reservation of 192.168.x.y; the DHCP reservations at my house follow a slightly different scheme. Apart from simply creating a new reservation for the server on my router, is there anywhere within unRAID I need to manually update?
      3) I run a number of reverse-proxied services on unRAID. Since I run cloudflare-ddns, I take it Cloudflare will automatically be updated with the new public IP (i.e. I don't need to do anything or reinstall LetsEncrypt etc.)? Thanks for any help/advice!