
jamesp469's Posts
  1. Thanks, that's exactly what it was. I forgot I had removed a GPU to put this card in, and that GPU was being passed through to my VM. All set now!
  2. I'm having issues adding a second HBA card to my build. I recently purchased a CSE-836 JBOD unit online, as well as a 9207-8e and two SFF-8088 to SFF-8088 cables to connect them. The JBOD (CSE-836E16-R92JBD) has a SAS2 backplane (BPN-SAS2-836EL1). My original build has an E3-1230v5 in an X11SSM-F (BIOS 1.0b) with a 9201-8i in CPU Slot7 (x8). I've added the 9207-8e to CPU Slot6 (x8 in a x16 slot), and while both cards show as available in the BIOS, I'm still running into issues getting the new one working in Unraid. When I initially added the internal card, I recall having issues getting it working as well; I believe the solution was binding it to the vfio-pci driver via the VFIO-PCI config (though that may have been for VM passthrough). With PCIe ACS Override enabled, each card appears in a separate IOMMU group:

     IOMMU group 11: [1000:0087] 01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
     IOMMU group 12: [1000:0072] 02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

     When I disabled the ACS override, the system wouldn't see any of the drives connected to the original card. When I tried binding both cards to the vfio-pci driver, I didn't see any of the JBOD drives. Not entirely sure where to go from here; any help would be appreciated. Thanks in advance. unraid-diagnostics-20210111-1539.zip
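Whether the ACS override is needed comes down to how the kernel grouped the devices, which can be inspected directly. This is a generic sketch (plain Linux sysfs walk, not Unraid-specific; it assumes `lspci` from pciutils is installed) that prints every IOMMU group and the PCI devices it contains:

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices in it, using the layout the
# kernel exposes under sysfs. Useful for checking whether the two HBAs
# really land in separate groups once the ACS override is toggled.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"  # kernel's sysfs location
    local path dev grp
    for path in "$base"/*/devices/*; do
        [ -e "$path" ] || continue       # no IOMMU groups on this system
        dev="${path##*/}"                # PCI address, e.g. 0000:01:00.0
        grp="${path%/devices/*}"
        grp="${grp##*/}"                 # group number
        echo "IOMMU group $grp: $(lspci -nns "$dev")"
    done
}

list_iommu_groups
```

If both controllers only separate with the override enabled, they share a group natively, which also explains why binding one to vfio-pci drags the other along.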
  3. You need to change "-----BEGIN PRIVATE KEY-----" to "-----BEGIN RSA PRIVATE KEY-----" in your key file to get this to work. No idea when the requirement changed, but this fixed the issue for me last night.
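A note of caution on the hand-edit: "BEGIN PRIVATE KEY" (PKCS#8) and "BEGIN RSA PRIVATE KEY" (PKCS#1) are different encodings, not just different labels, so while lenient parsers that only sniff the header may accept the rename, strict ones that decode the body will not. Re-encoding the key with openssl is safer. A sketch using a throwaway key (in practice you would feed in your existing certificate key; filenames here are arbitrary):

```shell
# Generate a throwaway RSA key; genpkey always writes the PKCS#8 form,
# i.e. the "-----BEGIN PRIVATE KEY-----" header.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out /tmp/demo.key 2>/dev/null
head -1 /tmp/demo.key   # -----BEGIN PRIVATE KEY-----

# Re-encode to the legacy PKCS#1 form ("BEGIN RSA PRIVATE KEY").
# OpenSSL 3.x needs the -traditional flag; OpenSSL 1.1.1 emits PKCS#1 by
# default and rejects that flag, hence the fallback.
openssl rsa -in /tmp/demo.key -traditional -out /tmp/demo.rsa.key 2>/dev/null \
    || openssl rsa -in /tmp/demo.key -out /tmp/demo.rsa.key
head -1 /tmp/demo.rsa.key   # -----BEGIN RSA PRIVATE KEY-----
```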
  4. I have had this running for years just fine using the docker container. Wonderful tool.
  5. I'm having an issue with RSS feeds over VPN. The provider I pull RSS feeds from requires that the IP address used to set up a feed match the IP address used to load the feed and download from it. The container runs on my server, and I set everything up from my desktop while connected to the VPN. The issue I'm running into: when my desktop is not on the VPN and I reload the feed through the container's GUI (the container is on the correct VPN IP), I get errors that the IP does not match the VPN IP. I've run multiple checks with checkmyip.torrentprivacy.com, and the container's external IP still shows as the correct VPN IP. However, if I log into my VPN service on the desktop and try to reload the feed, it works perfectly. This is only an issue with RSS; when I download torrents on the VPN-connected desktop, disconnect from the VPN, and load them into the rTorrent container, they download as expected. I'll leave the feed alone for a few weeks to see whether it can pull from the provider automatically, but I'm a bit befuddled as to why this isn't working as I expected.
  6. Thank you! This fixed it. Not sure why the default gateway assignment on eth1 was set like that; I don't recall setting it.
  7. After upgrading to 6.8, my docker tab hangs with no resolution. The actual docker URLs are still accessible; diagnostics attached. I'm also seeing the following error in the log:

     Dec 11 09:09:09 unRAID nginx: 2019/12/11 09:09:09 [error] 7631#7631: *21 upstream timed out (110: Connection timed out) while reading response header from upstream, client:, server: , request: "GET /plugins/dynamix.docker.manager/include/DockerContainers.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "MYADDRESS.unraid.net", referrer: "https://MYADDRESS.unraid.net/Docker"

     Also, all of my plugins are showing as status "unknown" and seem to hang, with the following error:

     Dec 11 09:16:03 unRAID root: Fix Common Problems: Error: Unable to communicate with GitHub.com

     Thanks! unraid-diagnostics-20191211-0917.zip
  8. I'm seeing this issue as well, plus all of my installed plugins are listed as status "unknown". Diagnostics attached, thanks! unraid-diagnostics-20191211-0917.zip
  9. I'm running into issues where Mylar regularly disables my search providers. After digging into the logs, it looks like this happens after the daily API limits are hit (WARNING DAILY API limit reached. Disabling provider usage until 12:01am); however, provider usage is never automatically re-enabled, and I've always had to re-enable the provider manually. Is this something I should take to the Mylar team, or something that can be managed at the Docker level (force re-enabling search providers)?
  10. The repair service recommended cloning/moving the data off of the drive whenever possible. It was only a PCB replacement, so not sure if that was a liability-relief statement or not.
  11. UPDATE: I was able to get one data drive repaired, and I expect it to be delivered back here tomorrow. Can someone point me to the correct process for turning my system back up? I have brand new drives to replace the blown 2nd parity drive and the other data drive that was not repaired, as well as a new drive to replace the repaired drive once my parity is rebuilt.
  12. Thanks, I've sent one of the data drives in to get tested and see if a PCB replacement would fix it. If it does, I can re-install that drive plus two new ones to replace parity2 and the other data drive, and hopefully be back in business.
  13. So, in this situation, I have 5 of 8 drives in my array still spinning up, presumably with no data issues: dual cache, parity1, and 2 of 4 data disks. Is there any scenario where I could rebuild partial data to a new drive using the parity drive, or should I just clear the drive and start fresh? Also, will I have any issues keeping the existing data on my cache pool and two working drives when I start a new array config? Thanks in advance.
  14. Thanks for responding. The mistake was using a modular PSU cable from my old PSU/build in my new PSU/build, leaving me with five drives that won't spin up (two are net new to the system). So that won't be happening again. But if that's the case regarding repairing the parity drive, then yes it's probably safer to just bite the bullet on repairing the two data drives.