talmania

Posts posted by talmania

  1. 4 hours ago, MAM59 said:

    Yeah, I have the same over here. Just retired it in favour of a ProArt x670 WiFi with a 7750x and DDR5. But the same flaw: if you use slot 2, slot 1 drops to x8 and the other eight lanes go over to slot 2.

    I think it was the same split for the X570, but with x4 instead of x8, as you said.

    Sadly it's not a simple "port 1 uses lanes 1-4 and port 2 uses lanes 5-8" thing. The available lanes all go to the card's own chipset, which handles the two ports and reserves bandwidth for both. So I am afraid it will effectively lower port 1's throughput even though port 2 is not used at all. But I might also be wrong.

    Anyway, these are "server only" cards; they are not designed to run on cheap motherboards with limited lane counts.

    (Server boards usually come with real x8 slots.)

     

    Well, that was definitely it. I got the new build into a state where I could do some preliminary testing and found that writing to cache-backed user shares from another ConnectX-4 of the same model sustained 450-525 MB/s, while writing directly to the cache drive sustained 935 MB/s to 1.05 GB/s.

  2. 1 hour ago, MAM59 said:

    Yeah, I have the same over here. Just retired it in favour of a ProArt x670 WiFi with a 7750x and DDR5. But the same flaw: if you use slot 2, slot 1 drops to x8 and the other eight lanes go over to slot 2.

    I think it was the same split for the X570, but with x4 instead of x8, as you said.

    Sadly it's not a simple "port 1 uses lanes 1-4 and port 2 uses lanes 5-8" thing. The available lanes all go to the card's own chipset, which handles the two ports and reserves bandwidth for both. So I am afraid it will effectively lower port 1's throughput even though port 2 is not used at all. But I might also be wrong.

    Anyway, these are "server only" cards; they are not designed to run on cheap motherboards with limited lane counts.

    (Server boards usually come with real x8 slots.)

     

    Well, I guess this just gives me extra incentive to finish up the Z690 build I've been lazy about. Thankfully it does have a PCIe 4.0 x16 slot that operates at x16. Time to test…

  3. 6 hours ago, MAM59 said:

    You would have been better off buying a ConnectX-3 instead (the latest edition).

    This is because the X-4 needs 8 lanes on the PCIe slot, else it will slow down by half.

    The 450 MB/s you see tells me that you have either put the card into the wrong slot (x4 only), or your BIOS settings somehow prevent use of the full lanes.

    With 10G you should expect to see ~1.1 GB/s, not much less.

    Look at your motherboard's manual to see if there is an appropriate free slot (watch out for (*) footnotes; many boards have restrictions with certain slot combinations). Usually there is only one real x16 slot, and it's used by the graphics card.

    What you need is a ConnectX-3 with PCIe 3.0 x4. (Warning! Many models of this card are PCIe 2.0 x8; those won't work as well...)

     

     

    Thank you for responding!!  I think you may be onto something here... the ConnectX-4 I purchased is actually a 25GbE card (MCX4121A-ACAT), as I'm awaiting the arrival of a 100GbE switch shortly. The motherboard is an ASUS TUF Gaming X570-Plus WiFi, and the manual does show two PCIe 4.0 x16 slots, but when dual VGA/PCIe cards are installed the second slot runs at PCIe 4.0 x4 instead of x8. OK, bear with me here and please correct me if I'm totally wrong (I probably am!). I could not locate a board diagram for the card, but I'm assuming it carries 50Gb (2x 25Gb ports) across the 8 lanes. Each PCIe 4.0 lane is 2 GB/s of bandwidth; PCIe 3.0 is 1 GB/s. So assuming the card is split 4 lanes per port, that's 4 GB/s (4000 MB/s) per port, with 1000 MB/s per lane. If that's running at half speed with the fewer x4 lanes (2 per port??), that's 500 MB/s per lane, or 1000 MB/s per port, though I'm guessing that's split between send and receive?? And holy hell, there's the problem! Do I have that right? The 7.5 Gbps iperf result would be both lanes at 500 MB/s getting to 1000 MB/s, i.e. the 937.5 MB/s iperf shows, less overhead.
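
    Since the whole question hinges on unit conversions, here's a quick back-of-the-envelope sketch of the arithmetic (a rough sanity check only; the per-lane figures are round numbers that ignore PCIe encoding and protocol overhead):

        # Rough bandwidth arithmetic for the slot/port math above.
        # Per-lane figures are approximations; overhead is ignored.
        PCIE3_LANE_GB_S = 1.0   # ~1 GB/s per PCIe 3.0 lane
        PCIE4_LANE_GB_S = 2.0   # ~2 GB/s per PCIe 4.0 lane

        def slot_gb_s(lanes: int, per_lane: float) -> float:
            """Aggregate one-direction slot bandwidth in GB/s."""
            return lanes * per_lane

        print(slot_gb_s(8, PCIE4_LANE_GB_S))   # x8 Gen4 slot: 16.0 GB/s
        print(slot_gb_s(4, PCIE4_LANE_GB_S))   # x4 Gen4 slot:  8.0 GB/s
        print(slot_gb_s(4, PCIE3_LANE_GB_S))   # x4 Gen3 slot:  4.0 GB/s

        print(25 / 8)          # one 25GbE port = 3.125 GB/s of line rate
        print(2 * 25 / 8)      # both ports     = 6.25 GB/s
        print(7.5 / 8 * 1000)  # iperf's 7.5 Gbit/s = 937.5 MB/s, the figure seen above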

  4. I've run 10Gb on my Unraid server for a really long time now and have finally spent some time trying to optimize my transfer speeds. Some history first:

     

    Initial Config & Revision:

    Workstation:

    AMD Ryzen 3500

    Intel X540-T1 with Cat5e to a Brocade 10Gb switch

    Transfer speeds: wildly fluctuating from 450 MB/s down to 70 MB/s

    Workstation Revision 3/2024:

    ConnectX-4 SMF Fiber

    Consistent ~400-480 MB/s transfer speeds

     

    UNRAID:

    Supermicro X10DRH-IT

    Dual E5-2670v3

    128GB RAM

    Intel X540 dual-port 10GBASE-T

    Cache: dual Crucial P3 Plus 4TB PCIe 4.0 x4 drives in a PCIe 3.0 x4 slot (mirrored; theoretically 4100 MB/s write)

     

    After seeing lots of variance in speed when writing to cache ever since the initial 6.x upgrade, and never being super satisfied with transfer speed (for reference, I would get ~400 MB/s peaks pre-6.x), I decided I wanted to do something about it. I ended up running SMF fiber under my house and replaced my NIC with a Mellanox ConnectX-4, and now I'm seeing consistent ~400-480 MB/s transfer speeds to the cache drive. When I run iperf3 against the server itself from my workstation, I'm seeing about 7.5 Gbps on average (~935 MB/s).

     

    The network is flat, and there's a single NIC enabled on Unraid with no teaming, jumbo frames, or any other tweaks. I'm curious what steps I should take next to troubleshoot and see if I can't get a better transfer rate. Thanks for any and all advice.

     

     

  5. Thank you for sharing your config. I was having a helluva time getting SWAG to work with Immich, despite the fact that I host tons of other apps and domains, and sure enough, when I used 8080 (instead of the 8081 I was redirecting to) it started working.
     

    Really appreciate you sharing your notes here as this one has been a struggle. Thank you!

  6. 12 hours ago, alturismo said:

    To make it short: yes, this works. You may want to just use the LAN IP instead of the NAME in the proxy conf.

     

    Here's a sample running AdGuard on a separate machine:

    [screenshot: AdGuard proxy conf]

     

    Thank you!  I refined my search and actually found your response to a similar question earlier in this thread. It was EXACTLY what I needed and helped immensely. Thank you very much for helping!

  7. I'm hoping someone can help me with my thought process and see if I'm off base or what I'm thinking is possible with my current swag configuration:

     

    It's working perfectly for all my various containers on Unraid. However, I have a new use case where I want to reverse proxy to another host on the local network (completely separate from the Docker network I'm using). Routing is going to be a problem, no?

     

    My goal is for external traffic looking for server1.myhome.com to hit SWAG and then be proxied to server1's internal IP address and port. My guess is that routing from the Docker network to the local area network (which Unraid sits on as well) is going to be the problem?

     

    Would I use the subdomain template to reverse proxy this request?  Thanks for any advice and insight!
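
    For anyone else attempting this, here is a minimal sketch of what such a subdomain proxy conf could look like when pointing SWAG at a separate host on the LAN. It follows the shape of the *.subdomain.conf samples that ship with the container; the server name, IP, and port below are placeholders, not values from this thread:

        server {
            listen 443 ssl;
            server_name server1.*;

            include /config/nginx/ssl.conf;
            client_max_body_size 0;

            location / {
                include /config/nginx/proxy.conf;
                # LAN IP of the target host, per alturismo's suggestion above;
                # no Docker DNS resolver is needed for a non-container target.
                proxy_pass http://192.168.1.50:8080;
            }
        }

    Since Docker NATs a bridge network's outbound traffic through the host, SWAG can normally reach LAN hosts without any extra routing.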

  8. Hi all. I replaced my cache drives the other day and found, when I turned Docker back on, that nothing was listed at all. So I added my templates back in, and that seemed to work just fine, save my SWAG docker.

     

    Long story short, I ended up renaming the entire /config folder (which had been in use for a LONG time, since the very early letsencrypt days) to see if a complete reinstall would work. I got caught by letsencrypt's rate limit. Is there a way I can move over the certs that were generated in the old /config structure? Thanks!

     

    RESOLVED: In case anyone comes across this: I found a thread about CA Backup/Restore and had completely forgotten that the app was running on my system. I did a restore of everything, and it's working perfectly now.

  9. I've been running the following for a LONG time (easily 8+ years) and am looking at doing more (virtualization & conversion, maybe gaming) from my system.  Here's what I currently have:

     

    Supermicro X8SIL

    x3470 Xeon

    16GB ECC

    4220 Norco 20x 3.5" HS

    2x MV8's

    10TB Parity

    10x 3-4TB legacy drives that are getting phased out

    5x 10TB drives

    2x 512GB Samsung EVO SSD Cache

     

    My only real requirements are virtualization support, the ability to handle more than 1-2 conversion streams at a time, and, MOST important, lights-out management (iLO, DRAC, name your flavor). My thought was to keep the 4220 (with upgraded fans & PSU) since I've got the open bays and will be further consolidating legacy drives, but everything else is up for replacement.

     

    Thanks for any and all suggestions!!

     

     

  10. 14 hours ago, cheesemarathon said:

    All info for the Bitwarden docker is stored in your appdata folder. All I am doing is backing up my entire appdata folder every day using duplicati. However, you could use this image to create incremental backups of the sqlite database. In theory, if you back up the appdata folder whilst data is being written to the database, you could create a corrupted backup, hence the need for the additional container to run the sqlite backup command. However, I'm the only user on my instance and I run my backups late at night, so it's not been an issue for me. Yet...

    Thanks, cheese!!
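
    For reference, a minimal sketch of the consistent-snapshot idea cheesemarathon describes, using sqlite's online backup API instead of copying the database file mid-write. The paths are assumptions for illustration, not documented defaults; adjust them to wherever your appdata actually lives:

        # Snapshot a live bitwarden_rs sqlite database without stopping the container.
        import sqlite3

        SRC = "/mnt/user/appdata/bitwarden/db.sqlite3"    # live database (assumed path)
        DST = "/mnt/user/backups/bitwarden-db.sqlite3"    # snapshot target (assumed path)

        src = sqlite3.connect(SRC)
        dst = sqlite3.connect(DST)
        with dst:
            src.backup(dst)   # yields a consistent copy even if the app writes mid-backup
        dst.close()
        src.close()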

  11. Another question for cheesemarathon or others: how are you backing up Bitwarden? I've read multiple GitHub recommendations, but a lot of them refer to pulling in a Docker container that has the sqlite3 bits, etc. Wondering how you would recommend doing backups, cheese? Thanks!

  12. 5 hours ago, cheesemarathon said:

    This uses a Docker image built from dani_garcia's repo, so any changes/improvements that get made there will appear in an update here. So in short: if and when Duo support arrives, it will be supported here on unRAID.

    Thanks, Cheese!! Great to hear; I appreciate your work!

  13. Question regarding Bitwarden: I can see there's a bitwarden_rs here that's forked from the one by dani_garcia. The one by dani_garcia has a Duo version, which I can only assume is working towards adding the Duo functionality (I'm not a GitHub person/programmer, so it's mostly Greek to me), but I'm curious whether there are plans to ever have Duo working in the Unraid-published version? I've subscribed to Bitwarden but would rather host it myself, and using Duo would be really nice.

  14. I've found that when accessing Nextcloud externally via the iOS client, it takes approximately 30-60 seconds to "wake up" and connect to the server; until then it looks like a non-responsive, hung app, and tapping various areas of the Nextcloud iOS app does nothing. Internally it connects immediately via the app and the website; externally it connects immediately via a browser. This change probably happened close to the 15.0 upgrade. Anyone seen this before and have any insights or places to start looking? Thanks!

  15. kizer is spot on: the community here is incredible, and I've had my butt saved by its members more than once. I've been using the same license (and the same USB key, now that I think about it, hmmm) since buying a 1TB drive was considered HUGE and cost-prohibitive. If you look at the cost of my license over the last decade-plus, it's amounted to absolute peanuts.

  16.  

    EDIT: Sometimes typing something out can REALLY help. No more than 10 minutes after publishing this post, these lines jumped out at me:

     

    		#auth_basic "Restricted";
    		#auth_basic_user_file /config/nginx/.htpasswd;

    I commented out those extra authentication steps and IT WORKS! I guess I was close and not as stupid as I thought. I would still love to hear best practices going forward: whether this approach in site-confs/default is preferable to proxy-confs/app.subdomain.conf, and what the distinction is. Leaving this here in case it helps someone down the road.

     

    I've been running letsencrypt perfectly with Nextcloud for years now, thanks to this awesome community. I've recently decided to try more dockers than just Plex & Nextcloud (specifically Bitwarden, but my questions are primarily around letsencrypt), and I'm trying to learn how to publish more than one docker. I've jumped back to square one and am learning about the configs etc. (from watching spaceinvader videos and reading articles at linuxserver.io), and I feel like I'm SUPER close but missing something subtle somewhere (or maybe I just think I'm closer than I actually am).

     

    I've got DNS working for the bitwarden subdomain on a custom domain (it was already working for Nextcloud). I created the new Docker network and moved the containers over to it for communication (as I understand is necessary), and then passed the bitwarden subdomain to letsencrypt successfully after working with nginx/proxy-confs and creating a bitwarden.subdomain.conf file. Then I realized there's a default in nginx/site-confs that looks like the following, and that when I hit bw.mydomain.com externally I'd land on the Nextcloud landing page.

     

    server {
        listen 80;
        listen 443 ssl;

        root /config/www;
        index index.html index.htm index.php;

        server_name nextcloud.mydomain.com;

        ### SSL Certificates
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        ### Diffie–Hellman key exchange ###
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        ### Extra Settings ###
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        ### Add HTTP Strict Transport Security ###
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
        add_header Front-End-Https on;

        client_max_body_size 0;

        location / {
            proxy_pass https://172.23.1.31:444/;
        }
    }

    So I tried several things from that point:

    1) I added an include statement in site-confs/default referencing the proxy-confs (the *.subdomain.conf file I created), which ended up throwing errors in the letsencrypt log on startup.

    2) I ignored the proxy-confs altogether and added the following to my site-confs/default after the above code:

     

    server {
        listen 80;
        listen 443 ssl;

        server_name bw.mydomain.com;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            auth_basic "Restricted";
            auth_basic_user_file /config/nginx/.htpasswd;
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_Bitwarden Bitwarden;
            proxy_pass http://172.23.1.31:8343;
        }
    }

    I no longer land on the Nextcloud landing page, but I get a generic username/password authentication prompt and can get no further (I can access the docker internally, no problem). I've tweaked more than a few settings since then.

     

    I'm curious what best practice is: is it to use the proxy-confs or the site-confs to address the various services? In my research I've seen it done both ways... thanks in advance for any and all advice/help!
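
    For comparison, here is a minimal sketch of the proxy-confs route, patterned on the *.subdomain.conf samples that ship with the container. The container name and port are assumptions for a bitwarden_rs container joined to the custom Docker network, not values confirmed in this thread:

        server {
            listen 443 ssl;
            server_name bw.*;

            include /config/nginx/ssl.conf;
            client_max_body_size 128M;

            location / {
                include /config/nginx/proxy.conf;
                # Docker's embedded DNS resolves the container name at runtime.
                resolver 127.0.0.11 valid=30s;
                set $upstream_app bitwarden;
                set $upstream_port 80;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }
        }

    The appeal of this layout is that each service lives in its own proxy-confs file, so site-confs/default can stay close to stock and each service's config is self-contained.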