CybranNakh

Members · 27 posts

Everything posted by CybranNakh

  1. Looks to be, yes. Using NTP, the correct timezone, and Google's time servers.
  2. Hello! I have installed Guacamole according to the SpaceInvaderOne video, but when enabling TOTP I get to the login QR code and can add the MFA entry to Microsoft Authenticator, yet it continually tells me "Verification failed. Please try again". I have tried a clean reinstall, clearing my browser history, a different browser, and even a different MFA app. Nothing seems to work. Any ideas? Thank you very much!
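(For reference: TOTP codes are derived from the current 30-second time step, so even a modest clock offset between the server and the phone makes every code "fail" — which is why the NTP answer above resolves this. A minimal sketch of the RFC 6238 math; the secret here is the RFC test vector, not a real enrollment:)

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC 6238/4226 test secret, not a real key
print(totp(secret, 59))            # 287082 -> what an in-sync phone shows
print(totp(secret, 59 + 90))       # 338314 -> what a server 90s off expects
```

Two different codes from a 90-second skew: verification fails even though the enrollment itself is fine.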
  3. +1 This worked! Thank you very much!
  4. It really is unfortunate... I was so excited to set it up. I might see if I can find something in the GitHub repository and report back...
  5. I have some problems with the Reactive Resume container. I downloaded it from CA and put in my Firebase information but can't get to the loading screen. The webui flashes for a second and then just displays the background. I feel like I am missing something basic here...
  6. +1 This led me to the solution! I added the path as recommended in the linked discussion. I also discovered that the default app name in SWAG was nginx_recipes, and this needs to be changed to the name of your container (recipes) for a simple CA container pulldown.
  7. For some reason I am unable to access it through a reverse proxy (SWAG). I have set it up like every other app I run behind my reverse proxy. I saw some notes saying to edit the conf.d in the nginx folder of recipes, but I can't seem to find it in the appdata folder for recipes in Unraid... I feel like I'm missing something here... any thoughts? Thanks!
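(For context: SWAG is driven by per-app conf files under `appdata/swag/nginx/proxy-confs/`, not by files inside the proxied app's own appdata. A rough sketch of what a working `recipes.subdomain.conf` can look like — the container name, port, and include paths follow the standard linuxserver template and are assumptions, not values confirmed in this thread:)

```nginx
# appdata/swag/nginx/proxy-confs/recipes.subdomain.conf (sketch)
server {
    listen 443 ssl;
    server_name recipes.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app recipes;   # must match the container name, not nginx_recipes
        set $upstream_port 80;       # assumed port -- match your container's webui port
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```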
  8. This is true but not final. While consumer Nvidia cards have a two-stream limit, a quick Google or Reddit search will yield a lot of people with a solution. But it can't be discussed on this forum.
  9. If you are replacing your RAM, motherboard, and CPU, what is the combined budget? Since it does not seem like you have a lot of extras, you can save some on the motherboard and put more into the CPU. I have built roughly 10-15 rigs for people, and for Unraid my recommendation would be to go AMD Ryzen. Depending on your exact budget, you can get a used Ryzen 3900X for roughly $350 on eBay. For VMs you will want to allocate at least 4 cores. As for your VMs, you should check why your CPU usage is so high; I have a similar CPU with no such limitations (with no gaming VMs). If you do NOT do a gaming VM, you can use beta35 (and some googling about drivers) to pass your current GPU through to Plex for hardware transcoding (it really takes the load off the CPU). If you do want a gaming VM and don't want to give Plex a GPU for hardware transcoding, then I think a Ryzen platform CPU is your best bet. Unraid lets you allocate cores to specific dockers, VMs, etc., and AMD is the rising star as far as multicore tasks go. Hope this helps!
  10. "It's included in 6.9" — Ah! That makes sense now. Thank you very much!!! +1
  11. I am trying to install libffi but do not see it in NerdTools. I'm on Unraid 6.9 beta35. I tried uninstalling NerdTools and reinstalling, but it did not work. Am I missing something?
  12. I have still not been able to identify the issue. I have disabled a large number of dockers (with the exception of Plex, SWAG, Bitwarden). I am thinking of perhaps moving Plex's appdata folder to another drive to see if Plex really is the problem.
  13. Hello! I recently did a number of upgrades to my server and am out of other options; I have spent the better part of a couple of weeks troubleshooting. I swapped my 120GB Kingston SSD for a 500GB Samsung QVO as my cache drive and put the 120GB in as a second cache pool. I moved all the data to the array and then onto the new drive. I also upgraded from beta25 to beta35. Now I have a large number of writes on my cache drive (over 600,000 in a 24-hour period). I tried moving my docker image to an xfs img and even to a directory, but no help; the writing continues, and it might be worse on the xfs img than on the directory. iotop does not reveal anything of significance, really. Is there something I am missing? The only major difference I can see is that the SSDs are formatted differently (the 120GB shows MBR: 4KiB-aligned and the 500GB is MBR: 1MiB-aligned). I am aware that there is a massive uptick in this behavior when there is a stream on Plex and when there is transcoding, but I just assumed this is normal since it has to write the transcode file somewhere. Any help is appreciated! apollo-diagnostics-20201128-2153.zip
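(To put hard numbers on write counts like these independently of the dashboard, `/proc/diskstats` can be sampled directly: after the device name, the 5th field is writes completed and the 7th is sectors written. A sketch below; the device name `sdb` and every value in the sample line are fabricated for illustration:)

```python
def write_stats(diskstats_line: str) -> tuple[int, float]:
    """Return (writes completed, GiB written) from one /proc/diskstats line.

    Field layout after major/minor/name: reads, reads merged, sectors read,
    ms reading, writes, writes merged, sectors written, ms writing, ...
    """
    fields = diskstats_line.split()
    writes = int(fields[7])                  # writes completed
    gib = int(fields[9]) * 512 / 2 ** 30     # sectors written x 512 bytes, in GiB
    return writes, gib

# Fabricated sample line for a cache SSD:
line = "8 16 sdb 1200 30 96000 500 600000 90 52428800 7000 0 4000 7500"
w, g = write_stats(line)
print(w, round(g, 1))   # 600000 25.0
```

In practice you would grep the cache device's line out of `/proc/diskstats` twice, a day apart, and subtract; the counters are cumulative since boot.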
  14. It looks like I fixed it! SMART was failing on the HDD connected via the SAS breakout cable. After replacing the cable and reseating the LSI card, the problem was fixed!
  15. So it is the one plugged into the LSI card, and that same HDD fails the SMART test. It makes me think it is a software/firmware glitch. Just got new SAS breakout cables... will try again tonight and report back.
  16. I have ordered new SAS-to-SATA cables... I had bought cheap ones... hopefully the new cables will help. As far as the additional MAC address registration, the helium levels, and the SMART reporting go: no progress!
  17. Hello! I just did some upgrades to my server and now there have been some strange changes. I installed an LSI 9207-8i and a CyberPower UPS, and two problems have appeared. 1) A second MAC address is registering to the internal IP of my server... Unraid is 10.0.0.5 and I can access it, but a second MAC address shows up with the same IP. It is only there when the Unraid server is on, so it has to be something on the server. I have tried stopping all dockers, disabling Docker, and setting a static IP for Unraid in settings. 2) Some of the drives connected to the LSI card seem to have problems. SSDs are showing high CRC error counts, and I cannot run SMART tests on the hard drives. One is showing failing helium, but the value is still 100 like all the rest (Western Digital Reds). I have tried re-seating the card and upgrading the LSI firmware to 20.00.07.00, which stopped the CRC count from increasing on the SSDs. Any help on either problem is appreciated! Thank you! apollo-diagnostics-20201105-1047.zip
  18. Is upgrading to 6.9.0-beta25 all that is needed to fix this bug? The changelog says this issue has been fixed. I currently have encrypted xfs on my array and my cache drive, but iotop -oa still shows loop2 writing excessively. I'm assuming I am just missing something obvious here... I see one of the recommendations by @testdasi was to recreate the img as docker-xfs.img as a workaround.
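(When iotop blames loop2, the kernel can tell you which image file that loop device is backed by — `losetup /dev/loop2` shows it, and sysfs exposes the same thing at `/sys/block/loop2/loop/backing_file`. A small sketch reading that entry; the docker.img path in the comment is an example, not a confirmed path from this thread:)

```python
from pathlib import Path

def loop_backing_file(dev: str = "loop2", sysfs: str = "/sys") -> str:
    """Return the image file behind a loop device, e.g. Unraid's docker image."""
    return Path(sysfs, "block", dev, "loop", "backing_file").read_text().strip()

# On a live server:
#   loop_backing_file()  ->  e.g. /mnt/cache/system/docker/docker.img
```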
  19. This helped a lot! While the commands did not work for me, I found another comment in the thread discussing the loop2 error with encrypted xfs on the array and btrfs on the cache. The solution for me was to convert the cache drive to encrypted xfs. So far, the writes and GBs written have returned to normal levels! (This jogged my memory that this cache issue coincided with converting my array to encrypted xfs.)
  20. I have tried that command. Thank you for your reply! How would I look at the SMART data? From googling, it looks like I can just download the SMART report? I have attached it here (serial number removed). SMART Report.txt
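(For reference: the downloadable SMART report is essentially the output of `smartctl -a /dev/sdX`. A sketch of pulling one attribute's raw value out of that text — the table excerpt below is fabricated, and some attributes print multi-token RAW_VALUE fields that this simple split would need extra handling for:)

```python
def smart_raw_value(report: str, attr_id: int) -> int:
    """Pull the RAW_VALUE column for one attribute ID out of smartctl -a text."""
    for line in report.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
            return int(fields[9])      # RAW_VALUE is the 10th column
    raise KeyError(f"attribute {attr_id} not found")

# Fabricated excerpt of a smartctl attribute table:
report = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       37
"""
print(smart_raw_value(report, 199))   # 37
```

A rising raw value on 199 (UDMA_CRC_Error_Count) points at cabling rather than the disk itself, which matches the breakout-cable findings above.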
  21. Hello everyone! My server has been up for 11 days now... during that time there have been 55 million writes to my cache drive. Now, as much as I would like to think my cache is doing its job, this seems rather excessive and the perfect way to kill the drive. How can I tell if this is part of the btrfs format bug rather than something I have done incorrectly? Should I just try to switch my cache format? I am running a Plex server, which I have read on some of the related forums can cause heavier cache writes, but not this much to my knowledge... Thanks for any help! Solution: My array was encrypted xfs while the cache was btrfs, which others have said makes the loop2 bug worse. My solution was to convert the cache to encrypted xfs... for me this has brought the excessive writes back down to normal levels! apollo-diagnostics-20200706-1608.zip
  22. That is true, I'll get rid of it... I have been meaning to upgrade the cache drive to at least 1TB just to have extra headroom... I don't really fill up the cache as of yet, but once I start doing VMs, I have a feeling I will need more space! Thanks for all your help!!
  23. It is kinda stupid... but I basically have two folders both named Downloads: one on an unassigned device and one on the cache. The one on the cache is not really in use; the unassigned-device one is the main one used.
  24. Sorry for the delay, I have been moving! I have attached the new diagnostics... I just took the docker offline, ran mover, and then restarted docker. I have also now set Systems to Only. apollo-diagnostics-20200514-2046.zip
  25. Thank you so much! This has been driving me nuts since I was so worried about so much use on the hard drive. You are correct: the user share "Downloads" was moved to an unassigned device, and I have switched it to cache-prefer as suggested. As for the Systems folder, I have set this to cache-prefer as you recommend! Hopefully this reduces the number of reads and writes to the array, and given the description you gave (and the fact that the docker image lives there), I am fairly certain you solved my problem! I have also run mover. Thanks again! I will monitor the R/W and see if the number continues to grow out of control!