Posts posted by Nickglott

  1. 1 hour ago, Jurykov said:

     

    Thanks for this!  Also fixed my 'corrupted' database issues.  Saved me setting up again from scratch.

    This is not working for me. It did allow the ZM docker to start, but none of my cams are capturing, and at the top it says "/dev/shm: 100%".

     

    The log file spits out all kinds of errors... Anyone else getting this?
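    A reading of "/dev/shm: 100%" usually means ZoneMinder has filled the container's shared-memory allocation. One commonly suggested check and fix (my suggestion, not confirmed in this thread; container name, size, and image placeholder are examples) is to verify usage and recreate the container with a larger `--shm-size`:

```shell
# Check how full the shared-memory filesystem is inside the container
# ("Zoneminder" is an example container name)
docker exec Zoneminder df -h /dev/shm

# Recreate the container with a larger shared-memory allocation
# (4g is an example; tune to your RAM and camera count)
docker run -d --name Zoneminder --shm-size=4g <zoneminder-image>
```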

     

  2. I recently upgraded my server from an Asus X99 Deluxe with an i7-5820K to an Asus TUF Gaming X570-Pro with an AMD 5600X.

     

    Everything else in the system is the same: 32GB DDR4-3200, a 512GB NVMe cache, 5x 8TB Seagate NAS drives for the array, and a 1TB Seagate for CCTV storage as an unassigned device.

     

    I just saw this in my log; the system has been running with no problems for about 3 weeks now. Could this just be an issue with the new kernel in Unraid 6.9.0-RC2 (I had to use the next branch, as the NIC on my X570 board is not supported by the stable version)? Or maybe a BIOS issue? I am running the newest BIOS, v3001 (12/8/2020). Or is it just that this is a brand-new CPU? I don't think the CPU is bad or going bad, as it is brand new and I have not had any other issues for the past month.

     

    Please let me know what you guys think! :)

     

    -Nick

     

    Jan  2 14:06:04 TheBox kernel: mce: [Hardware Error]: Machine check events logged
    Jan  2 14:06:04 TheBox kernel: [Hardware Error]: Deferred error, no action required.
    Jan  2 14:06:04 TheBox kernel: [Hardware Error]: CPU:1 (19:21:0) MC24_STATUS[-|-|MiscV|-|-|-|CECC|Deferred|-|Scrub]: 0x894853c35d5bd3eb
    Jan  2 14:06:04 TheBox kernel: [Hardware Error]: IPID: 0x0000000000000000
    Jan  2 14:06:04 TheBox kernel: [Hardware Error]: System Management Unit Ext. Error Code: 27
    Jan  2 14:06:04 TheBox kernel: [Hardware Error]: cache level: L3/GEN, tx: GEN
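    The log above shows a deferred, corrected ECC scrub event ("no action required"), not a crash. For more detail on such events, one option (my suggestion, not from this thread; assumes the rasdaemon package is installed) is to let rasdaemon collect and decode them:

```shell
# Start the RAS event collector, then dump what it has logged
# (normally run as a systemd service rather than directly)
rasdaemon &
ras-mc-ctl --summary
ras-mc-ctl --errors
```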

     

  3. 17 hours ago, strike said:

    @Nickglott, @lespaul In your nginx.conf uncomment these lines: 

    
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    Made my emby a lot faster, the sign in page is still slow but once you sign in it's faster. Also remember if you have configured your disks to spin down, there will be a delay to spin up any disk that's not spinning.

    Thanks, I added those, but it still seems to load pictures very slowly. I am beginning to think it's Nginx that causes it, since everything goes through there with the reverse proxy. I have my disks set to never spin down, and my appdata is running off an NVMe. The media itself plays instantly; it's the HTML5 web interface behind it that's very slow. I've been tweaking it for about 6 days with no luck yet. I'm thinking about just opening the port and using the cert without the reverse proxy. When I get some more free time I'll test that and see if I can still have letsencrypt get the cert for that domain and subdomain without using the Nginx server to route it, but have the subdomain go straight to the port, all while keeping Emby Connect functional.
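    One thing worth ruling out (my suggestion, not from the thread) is nginx proxy buffering: with buffering enabled and small buffers, image responses can stall behind the proxy. A sketch of the relevant directives for the emby server block, with example values:

```nginx
# Inside the emby server or location block — example values, untested here
proxy_buffering off;            # stream responses straight through to the client
# or, if buffering is kept on, enlarge the buffers instead:
# proxy_buffers 32 4k;
# proxy_busy_buffers_size 8k;
```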

     

    15 hours ago, ben-nl said:

    Yeah, I built my reverse proxy based on that guide :(, but thank you!

  4. Hello all,

     

    So I am in the process of securing my server with SSL. Currently I have everything configured with letsencrypt, and it works. My only problem seems to be Emby. It works: it forwards http to https, the cert is good, and everything loads, but it is horribly slow. Not going through Nginx (straight IP + SSL port), it works just as expected, apart from the obviously invalid cert from the missing domain. So the problem has to lie within Nginx and/or the reverse proxy. Any help would be greatly appreciated, and I'm wondering if anyone else has been having issues like this.

     

    Here is my reverse proxy (my domain replaced with DOMAIN). I am not using the default, but a file in site-confs named emby:

    ##EMBY Server##
    server {
        listen 443 ssl;
        server_name emby.DOMAIN.cc;

        root /config/www;
        index index.html index.htm index.php;

        ssl_dhparam /config/nginx/dhparams.pem;

        ###SSL Certificates
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        ssl_session_timeout 30m;
        ssl_session_cache shared:SSL:50m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        proxy_hide_header X-Powered-By;
        add_header X-Xss-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Strict-Transport-Security "max-age=2592000; includeSubdomains" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header 'Referrer-Policy' 'no-referrer';
        add_header Content-Security-Policy "frame-ancestors DOMAIN.cc emby.DOMAIN.cc;";

        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        location / {
            proxy_pass https://192.168.1.2:8446/;
        }
    }
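    To narrow down whether the slowdown lives in nginx or in Emby itself, a quick timing comparison can help (the URLs below are examples based on the config above):

```shell
# Time a request straight to Emby (self-signed cert, hence -k)
curl -k -o /dev/null -s -w "direct:  %{time_total}s\n" https://192.168.1.2:8446/
# Time the same request through the nginx reverse proxy
curl -o /dev/null -s -w "proxied: %{time_total}s\n" https://emby.DOMAIN.cc/
```

    If the proxied number is consistently much larger, the proxy config is the place to dig.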

    Here is my default, in case that matters; it points to my Heimdall docker, which holds links to all my installed dockers and apps.

     

    ## Version 2018/01/29 - Changelog: https://github.com/linuxserver/docker-letsencrypt/commits/master/root/defaults/default
    
    # listening on port 80 disabled by default, remove the "#" signs to enable
    # redirect all traffic to https
    server {
    	listen 80;
    	server_name _;
    	return 301 https://$host$request_uri;
    }
    
    # main server block
    server {
    	listen 443 ssl default_server;
    
    	root /config/www;
    	index index.html index.htm index.php;
    
    	server_name _;
    
    	# all ssl related config moved to ssl.conf
    	include /config/nginx/ssl.conf;
    
    	client_max_body_size 0;
    
        location / {
            proxy_pass https://192.168.1.2:8445/;
            proxy_max_temp_file_size 2048m;
            include /config/nginx/proxy.conf;
        }
    }

     

  5. +1 

     

    I too would love to see this. I have been trying to fix my cache I/O performance: my dockers/VMs/web interface time out for 20-30 seconds, sometimes longer, when my Windows VM is downloading at 75+ MB/s. I think this issue is caused by the disk (SSD) being unable to keep up with background tasks along with 1000+ connections and processing the data inside an image file. My old setup had every SATA port full (6x 4TB HDDs in my array and 1 SSD as cache). This is an ITX build, so space in my Node 304 is very valuable.

     

    New setup is as follows:

    4x 8TB HDD array via onboard SATA ports

    1x m.2 NVMe 512GB SSD cache (via PCI-E 3.0 x4 [x16 slot])

    2x 4TB (hardware RAID1 via an ASMedia card in the mini-PCIe WiFi adapter slot, PCIe 2.0 x1) for backups

    2x 240GB SSD via onboard SATA ports for VMs

     

    I am currently waiting on my new 8TB drives to preclear, so this setup has gone untested so far. My only issue with this setup is that I would like my two 240GB SSDs to be in RAID0 for my VMs/domains share. Since my motherboard uses Intel RST, which is not true hardware RAID, Unraid is unable to see it as a RAID. I would use my 2-port ASMedia card to do this, but since they are SSDs and that is a Gen2 x1 card, I would lose a lot of performance. Plus, I would like my backup drive(s) to stay redundant in RAID1. My current workaround thinking is to run my Windows VM on one drive and use the second for my other VMs/domains folder.

     

    The only other option I see is to move my NVMe to the onboard m.2 slot, lose 2 SATA ports, and use a PCIe RAID card that can support the bandwidth. I don't really want to do this, because the m.2 slot is on the other side of my motherboard; I think I would run into heat issues, along with being unable to service a drive failure.

     

    In my case, having a separate "cache" pool in RAID0 is really my only other option. By the way, yes, I have 9 drives running inside a Node 304 ITX build :D.
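    For reference, if multiple pools ever land, a two-device btrfs pool can be converted to RAID0 for data from the command line. This is a generic btrfs sketch (device path and mount point are hypothetical, not from this post, and a GUI would normally handle it):

```shell
# Add the second SSD to an existing single-device btrfs pool
# (/dev/sdY1 and /mnt/vmpool are example names)
btrfs device add /dev/sdY1 /mnt/vmpool
# Rebalance: stripe data across both devices (raid0), keep metadata mirrored
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/vmpool
```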
