Brianf

Members
  • Posts: 9

  1. I was able to fix the issue by creating another user and then registering that user via the reverse proxy, i.e. using the external FQDN I set up in Cloudflare and Nginx Proxy Manager. The Authentication field then appeared after logging out and back in with the new user via the reverse proxy. I then logged out and tried my original user again, and the field appeared as it should. It seems that even though I cleared the cache, something didn't clear properly. I have since removed the new user and it's still working, see below.
  2. I am having the same issue using Nginx Proxy Manager. If I go to the local address then it works fine. I have other sites where I use built-in TOTP and they are fine, as is Authelia. I tried clearing the browser cache, using different browsers, turning off caching in Nginx Proxy Manager, clearing the cache on Cloudflare, and putting Cloudflare into dev mode (which turns off all caching). I was able to register and get the QR code the first time around, but that was done using the local address. (For a quick way to compare what the proxied and local paths actually serve, see the curl sketch after this list.)
  3. Are you using a proxy or some form of ad blocking? If so, try disabling it.
  4. Hi, I'm new to Unraid and have been building/configuring and testing my first Unraid server. Things have been going really well. The system is a dual Xeon E5-2680 v2 with 64 GB of ECC RAM, and the board I am using is the ASRock Rack EP2C602-4L/D16. For some reason the PCI port (not PCIe) on this board has an issue with every Intel PCI NIC I try to use: something (I believe BIOS-related) blocks the ability to communicate with the EEPROM on any Intel NIC I try. I am fairly confident it is just a BIOS issue, as I have confirmed these cards are fine on other systems using the Intel BootUtil tool; when using the same tool on the ASRock board it says "device not initialized". This issue in turn causes a problem for Unraid, in that the e1000 driver probe throws a checksum error and then sets the card's MAC to all 0's, which prevents operation. The cards themselves do show up as devices in Unraid, and if I boot Windows directly they will function (as Windows doesn't care about the checksum). I have read several Linux posts and this is a well-known problem. The fixes/workarounds listed that I have tried include:
     1. Stubbing the card and passing it to a VM - this causes a Device Manager error in Windows; I did not investigate further, as this is not what I really want to do.
     2. Using modprobe -r to unload the e1000 module, then modprobe e1000 eeprom_bad_csum_allow=1 - seems to have no effect (a shell sketch of this reload sequence appears after this list).
     3. Adding e1000.eeprom_bad_csum_allow=1 to the syslinux append line - has no effect. My append line looks like 'append vfio-pci.ids=1b73:1100 isolcpus=4-9,14-19,24-29,34-39 xen-pciback.hide=(10:00.0)(14:00.0)(15:00.0) e1000.eeprom_bad_csum_allow=1 initrd=/bzroot'.
     4. Recompiling the module with the checksum check commented out - I have not done this yet, as it is trickier on Unraid than on other Linux distros and more of a hack than I really wanted to do.
     I have passed all the details on to ASRock, who are investigating why the board behaves this way, but it's been over a week with no feedback as yet, apart from that it will take some time as they apparently don't have ready access to PCI network cards. This board has 4 onboard NICs, so this issue is by no means a showstopper, and I can use xen-pciback.hide= to allow passthrough of the individual interfaces. However, I would like, if possible, to run the Unraid server off the PCI NIC and have the 4 onboard interfaces stubbed for other projects. I am hoping one of you may be able to confirm whether the EEPROM checksum can be disabled without having to recompile the e1000 module. Thanks, Brian
  5. Remember these CPUs support quad-channel memory and there are 2 CPUs, so you want at least 8 sticks (4 per CPU) to take advantage of the extra bandwidth.
  6. For memory I went with 2 packs of https://www.ebay.com/itm/32GB-4X8GB-DDR3-1600MHz-ECC-REG-MEMORY-FOR-ASRock-EP2C602-4L-D16-SSI-EEB-Server/163099880305?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649
  7. You may be able to get it cheaper on eBay: https://www.ebay.com/itm/ASRock-EP2C602-4L-D16-SSI-EEB-Server-Motherboard-Dual-LGA-2011-Intel-C602-DDR3-1/142892539545?epid=7003305837&hash=item21450f1e99%3Ag%3AFPQAAOSwxllbIiRt&_sacat=0&_nkw=Asrock+Rack+ep2c602-4l%2Fd16&_from=R40&rt=nc&_trksid=m570.l1313. Also remember this board has no USB 3.0; if you want that, you will need a card.
  8. Can I ask why you would pass through the Marvell controller? I am using it fine within the array: set up shares and then map the drives to the VMs. That way I can use all 14 SATA ports on the board with no issues.
  9. As Hoopster has said, requirement #3 ups things quite a bit. I am currently building my first Unraid server and it's turning into quite an adventure; my requirements are almost exactly the same as yours:
     1) NAS/file shares for household data.
     2) Plex transcoding to serve up to 4/5 Chromecasts at once.
     3) 3 simultaneous Windows 10 gaming VMs capable of AAA titles, 1 at up to 1440p (I have an Acer Predator X34).
     4) Dockers for newsgroups, torrents, audio streaming and UniFi management.
     5) VM test lab/sandpit (this will only be used for my own personal learning when no one else is gaming).
     So far I have installed Unraid and configured the Dockers and 1 VM for testing, and am very happy with the performance. I still have some hardware to come and need to build a custom case for this (when you see the specs you'll understand why). Specs are:
     M/Board - ASRock Rack EP2C602-4L/D16 (dual Xeon board). I love this board; it may be old but it is almost perfect for what I need, especially with the bifurcation support on 2 of the x16 slots.
     CPUs - 2 x Xeon E5-2680 v2 (20 cores in total), currently cooled by cheap AIOs but will get waterblocks later.
     GPUs - 2 x Gigabyte 1070 WF OC V2 (will get waterblocks thanks to Bykski making them) + 1 x MSI Gaming X 1050 Ti for the wife's VM; she plays very few games.
     Memory - 64 GB ECC, 8 x 8 GB 1600 (with Jonsbo RGB heatsinks, because why not).
     PSU - 2 x EVGA SuperNOVA 650 G+ (much cheaper than a single 1300 W and has more connectors).
     Array drives - 3 x 8 TB IronWolf (new), 6 x 4 TB WD Blue and Seagate, and 1-2 TB additionals I already had.
     Cache - Samsung EVO 500 GB 2.5" SSD.
     VM drive - ASUS Hyper quad x16; this will house only 1 x ADATA SX8200 480 GB NVMe M.2 for now, but I will get more later as required (awaiting arrival).
     Additional cards - Sonnettech Allegro Pro USB 3.0 PCIe card, which gives me 4 controllers I can pass to VMs, then use hubs for each VM; Intel PWLA8391GT PRO/1000 GT (awaiting arrival), which will be used for Unraid, allowing me to pass the 4 onboard Intel gigabit NICs to the VMs.
     Additional stuff - 3 x StarTech 4-drive trayless drive bays, 4 x shielded PCIe x16 risers, 1 x PCI riser, 1 x PCIe x4 riser, modded EVGA PowerLink adapters for the GPUs, a bay card reader/USB 3.0/USB 2.0 with USB-C connector, and various USB 3.0 hubs.
     Based on my testing I can tick every box. I was originally looking at Threadripper but was worried about stability/compatibility, and single-CPU Intel really doesn't have the PCIe lanes; plus cost is a big factor, given that the motherboard, CPUs and RAM cost me just over $1000 USD (~$1400 AUD), whereas a 1950X at that time was $1300 AUD and the RAM required both kidneys to afford. I decided to jump on the new-old-stock board and took my chances, which has paid off. I have been building and benching while I wait for parts to arrive, and I am really enjoying this. I spent over a month working out the parts and still made some mistakes; for example, I purchased a StarTech PEXUSB3544V USB 3.0 card and could not get it to work, and these cards here in AUS are really expensive, so it was a bit of a costly mistake. Do lots of research and don't skimp on your base, i.e. motherboard, CPU & RAM; you can always add/upgrade the rest as you go.
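
Referenced from post 2: a minimal shell sketch for comparing what the app serves through the Cloudflare/Nginx Proxy Manager chain versus the local address. The hostname app.example.com and the LAN address/port are placeholders, not the actual setup; Cloudflare's cf-cache-status response header shows whether the proxied response came out of its cache.

    # Placeholders - substitute your external FQDN and the app's LAN address/port.
    # A cf-cache-status of HIT on the proxied side points at stale Cloudflare caching.
    curl -sI https://app.example.com/ | grep -iE 'cf-cache-status|cache-control|etag'
    curl -sI http://192.168.1.50:8080/ | grep -iE 'cache-control|etag'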
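
Referenced from post 4: the module-reload sequence from workaround 2, written out as a sketch. The eeprom_bad_csum_allow parameter name comes from the post itself; whether the e1000 module shipped with a given Unraid build actually exposes it is an assumption, and the sysfs check below will show if it doesn't.

    # Reload e1000 with the bad-checksum override named in the post.
    # If this e1000 build doesn't have the parameter, modprobe will reject it.
    modprobe -r e1000
    modprobe e1000 eeprom_bad_csum_allow=1

    # The parameter file exists in sysfs only if the module supports it.
    cat /sys/module/e1000/parameters/eeprom_bad_csum_allow 2>/dev/null \
      || echo "parameter not present in this e1000 build"

    # Check the probe messages for the EEPROM checksum error.
    dmesg | grep -i e1000 | tail -n 20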