Nickglott

Members
  • Posts

    14
  • Joined

  • Last visited

  • Gender
    Undisclosed


Nickglott's Achievements

Noob (1/14)

Reputation: 0
  1. That worked, thanks! FYI for others: I added --shm-size="16G"
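As a sketch of what that flag looks like in a plain docker run (the container and image names here are assumptions, not from the post), the command is printed rather than executed so it can be adapted to your own template:

```shell
# Illustrative only: give the ZoneMinder container a larger /dev/shm.
# "zoneminder" and the image name are placeholders, not from the post above.
SHM_SIZE="16G"
DOCKER_CMD="docker run -d --name zoneminder --shm-size=${SHM_SIZE} zoneminderhq/zoneminder"
# Print the command for review instead of running it blindly:
echo "$DOCKER_CMD"
```

On Unraid the same flag goes in the container template's "Extra Parameters" field rather than a raw docker run.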
  2. This is not working for me. It did allow the ZM docker to start, but none of my cams are capturing, and at the top it says "/dev/shm: 100%". The log file spits out all kinds of errors... Anyone else getting this?
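A "/dev/shm: 100%" reading usually means the shared-memory allocation is still too small for the monitors. A common rule of thumb (hedged; the resolution and buffer values below are illustrative, not from the post) is width x height x bytes-per-pixel x image buffer frames per camera:

```shell
# Rough per-camera shared-memory estimate (illustrative numbers):
WIDTH=1920; HEIGHT=1080
BPP=4        # 32-bit colour = 4 bytes per pixel
FRAMES=50    # monitor's image buffer size
BYTES=$((WIDTH * HEIGHT * BPP * FRAMES))
MB=$((BYTES / 1024 / 1024))
echo "Approx. ${MB} MB of /dev/shm per camera"
# Check what is actually in use right now:
df -h /dev/shm
```

If the total across cameras approaches the --shm-size value, captures stall and the filesystem shows 100%.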
  3. I recently upgraded my server from my Asus Deluxe X99 / i7-5820K to an Asus TUF Gaming X570-Pro with an AMD 5600X. Everything else in the system is the same: 32GB DDR4-3200, 512GB NVMe cache, 5x 8TB Seagate NAS drives for the array, and a 1TB Seagate for CCTV storage as an unassigned device. I just saw this in my log, and the system has been running with no problems for about 3 weeks now. Would this just be an issue with the new kernel in Unraid 6.9.0-RC2 (I had to use the next version of Unraid because the NIC on my X570 board was not supported by the stable version), or maybe just a BIOS issue? I am running the newest BIOS, v3001 12/8/2020. Or is it just that this is a brand new CPU? I don't think the CPU is bad or going bad, as it is brand new and I have not had any other issues for the past month. Please let me know what you guys think! -Nick
     Jan 2 14:06:04 TheBox kernel: mce: [Hardware Error]: Machine check events logged
     Jan 2 14:06:04 TheBox kernel: [Hardware Error]: Deferred error, no action required.
     Jan 2 14:06:04 TheBox kernel: [Hardware Error]: CPU:1 (19:21:0) MC24_STATUS[-|-|MiscV|-|-|-|CECC|Deferred|-|Scrub]: 0x894853c35d5bd3eb
     Jan 2 14:06:04 TheBox kernel: [Hardware Error]: IPID: 0x0000000000000000
     Jan 2 14:06:04 TheBox kernel: [Hardware Error]: System Management Unit Ext. Error Code: 27
     Jan 2 14:06:04 TheBox kernel: [Hardware Error]: cache level: L3/GEN, tx: GEN
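For anyone wanting to pull these MCE lines out of the syslog to post, a simple filter works. This sketch writes two of the lines from the post above into a temp file just so the snippet is self-contained; on the live box you would grep /var/log/syslog directly:

```shell
# Self-contained demo: sample log lines from the post, then the same
# filter you would run on the real syslog.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan  2 14:06:04 TheBox kernel: mce: [Hardware Error]: Machine check events logged
Jan  2 14:06:04 TheBox kernel: [Hardware Error]: Deferred error, no action required.
EOF
# On the live system: grep "Hardware Error" /var/log/syslog
MCE_LINES=$(grep -c "Hardware Error" "$LOG")
echo "Found $MCE_LINES hardware-error lines"
rm -f "$LOG"
```

A "Deferred error, no action required" scrub event like this one is generally a corrected/deferred error rather than a crash, but recurring entries would be worth watching.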
  4. Thanks, I added those, but it still seems to load pictures very slowly. I am beginning to think it's Nginx that causes it, since everything goes through there with the reverse proxy. I have my disks set to never spin down, and my appdata is running off an NVMe. The media itself plays instantly; it's the HTML5 web server behind it that's very slow. I've been tweaking it for about 6 days with no luck yet. I'm thinking about just opening the port and using the cert without the reverse proxy. When I get some more free time I'll test that and see if I can still have letsencrypt get the cert for that domain and subdomain without using the Nginx server to route it, but have the subdomain go straight to the port, all while keeping Emby Connect functional. Yeah, I built my reverse proxy based off that guide, but thank you!
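One hedged way to confirm whether the proxy is adding the latency, before ripping it out, is to time the same request both ways with curl. The hostnames/ports below come from the config in post 6 and are placeholders for your own setup:

```shell
# Compare total request time through nginx vs. direct to Emby.
# URLs are placeholders taken from the reverse-proxy config in this thread.
time_request() {
  # -k: self-signed/mismatched cert is fine for a timing test
  # -w '%{time_total}': print only the elapsed time
  curl -k -s -o /dev/null --connect-timeout 2 -w '%{time_total}s\n' "$1"
}
time_request "https://emby.DOMAIN.cc/"    # through the reverse proxy
time_request "https://192.168.1.2:8446/"  # direct to Emby
```

If the direct number is consistently much lower, the slowdown really is in the proxy layer rather than Emby itself.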
  5. Hmm, I assume it has to do with the reverse proxy settings then. Anyone else using it with Emby who was able to get it not to load slowly?
  6. Hello all, so I am in the process of securing my server with SSL. Currently I have everything configured with letsencrypt and it works. My only problem seems to be Emby. It works, it forwards http to https, the cert is good, and everything loads, but it is horribly slow. Not going through Nginx (straight IP + SSL port) it works just as expected, except obviously with an invalid cert from the missing domain. So the problem has to lie within Nginx and/or the reverse proxy. Any help would be greatly appreciated, and I am wondering if anyone else has been having issues like this. Here is my reverse proxy (my domain replaced with DOMAIN); I am not using the default but a file in site-confs named emby:

     ##EMBY Server##
     server {
         listen 443 ssl;
         server_name emby.DOMAIN.cc;
         root /config/www;
         index index.html index.htm index.php;
         ssl_dhparam /config/nginx/dhparams.pem;
         ###SSL Certificates
         ssl_certificate /config/keys/letsencrypt/fullchain.pem;
         ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
         ssl_session_timeout 30m;
         ssl_session_cache shared:SSL:50m;
         ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
         ssl_prefer_server_ciphers on;
         ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
         proxy_hide_header X-Powered-By;
         add_header X-Xss-Protection "1; mode=block" always;
         add_header X-Content-Type-Options "nosniff" always;
         add_header Strict-Transport-Security "max-age=2592000; includeSubdomains" always;
         add_header X-Frame-Options "SAMEORIGIN" always;
         add_header 'Referrer-Policy' 'no-referrer';
         add_header Content-Security-Policy "frame-ancestors DOMAIN.cc emby.DOMAIN.cc;";
         proxy_hide_header X-Powered-By;
         proxy_set_header Range $http_range;
         proxy_set_header If-Range $http_if_range;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection "upgrade";
         location / {
             proxy_pass https://192.168.1.2:8446/;
         }
     }

     Here is my default in case that matters; it points to the heimdall docker that holds the links to all my installed dockers and apps:

     ## Version 2018/01/29 - Changelog: https://github.com/linuxserver/docker-letsencrypt/commits/master/root/defaults/default
     # listening on port 80 disabled by default, remove the "#" signs to enable
     # redirect all traffic to https
     server {
         listen 80;
         server_name _;
         return 301 https://$host$request_uri;
     }
     # main server block
     server {
         listen 443 ssl default_server;
         root /config/www;
         index index.html index.htm index.php;
         server_name _;
         # all ssl related config moved to ssl.conf
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         location / {
             proxy_pass https://192.168.1.2:8445/;
             proxy_max_temp_file_size 2048m;
             include /config/nginx/proxy.conf;
         }
     }
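A commonly suggested tweak for slow media UIs behind nginx (an assumption worth testing, not a confirmed fix for this case) is to disable proxy buffering in the Emby location so responses stream through instead of being spooled first. This sketch writes the fragment to a temp file; on the letsencrypt container it would go inside the location block of the emby file under site-confs:

```shell
# Write an illustrative location block with buffering disabled.
# The proxy_pass target matches the config posted above.
SNIPPET=$(mktemp)
cat > "$SNIPPET" <<'EOF'
location / {
    proxy_pass https://192.168.1.2:8446/;
    # Stream responses to the client instead of buffering them first
    proxy_buffering off;
    proxy_request_buffering off;
}
EOF
grep -q "proxy_buffering off" "$SNIPPET" && echo "snippet written to $SNIPPET"
```

After editing the real config, `nginx -t` inside the container checks the syntax before a reload.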
  7. +1, I too would love to see this. I have been trying to fix my cache I/O performance: my dockers/VMs/web interface time out for 20-30 seconds, sometimes longer, when my Windows VM is downloading at 75+ MB/s. I think this issue is caused by the disk (SSD) being unable to keep up with background tasks along with 1000s of connections and processing the data inside an image file. My old setup had every SATA port full (6x 4TB HDDs in my array and 1 SSD as a cache). This is an ITX build, so space in my Node 304 is very valuable. The new setup is as follows:
     - 4x 8TB HDD array via onboard SATA ports
     - 1x 512GB M.2 NVMe SSD cache (via PCIe 3.0 x4 [x16 slot])
     - 2x 4TB (hardware RAID1 via an ASMedia card in the mini-PCIe wifi adapter slot, PCIe 2.0 x1) for backups
     - 2x 240GB SSD via onboard SATA ports for VMs
     I am currently waiting on my new 8TB drives to preclear, so this setup has gone untested so far. My only issue with this setup is that I would like my two 240GB SSDs to be in RAID0 for my VMs/domains share. Since my motherboard uses Intel RST, it is not true hardware RAID, so Unraid is unable to see it as a RAID. I would use my 2-port ASMedia card to do this, but since they are SSDs and that is a Gen2 x1 card, I would lose a lot of performance. Plus I would like my backup drive(s) to be redundant in RAID1. My current thinking for a workaround is to run my Windows VM on one drive and use the second for my other VMs/domains folder. The only other option I see is to move my NVMe to my onboard M.2 slot, lose 2 SATA ports, and use a PCIe RAID card that can support the bandwidth. I don't really want to do this, because my M.2 slot is on the other side of my motherboard and I think I will run into heat issues, along with the inability to service a drive failure. In my case, having a separate "cache" pool in RAID0 is really my only other option. By the way, yes, I have 9 drives running inside a Node 304 ITX build.
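For anyone trying to confirm whether the cache SSD really is the bottleneck during those stalls, a quick-and-dirty sequential write test is a reasonable first check (a sketch, not a proper benchmark; it writes a small temp file, so point it at the filesystem you care about):

```shell
# Crude sequential write test. Writes 64 MB to a temp file and reports
# throughput; conv=fdatasync forces the data to be flushed before dd exits.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
SIZE=$(wc -c < "$TARGET")
rm -f "$TARGET"
echo "wrote $SIZE bytes"
```

Running it while the Windows VM is downloading, versus while idle, would show how much headroom the cache device loses under load. A tool like fio gives far more realistic mixed-I/O numbers if this points at the disk.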
  8. You could try this. Not sure it will help, but worth a try I suppose, since it semi-modifies RDP files: https://github.com/stascorp/rdpwrap/releases
  9. I just saw the screenshot; try making a local account. I'm guessing this is not the case, but I will post a link anyway: https://tinkertry.com/how-to-change-windows-10-network-type-from-public-to-private
  10. Is it a Microsoft account or a local account? Also, is the network connection considered "private"? It cannot be public for RDP, so you may need to switch that over. Try adding another user of the opposite type to the one that is not working, i.e. local or Microsoft. If it's a local account, I think the login is localaccount/username; obviously, if it's a Microsoft account, it's the full email address. If that won't work, add a Microsoft account to store your digital license, and make sure the activation screen says it is linked to your Microsoft account. After reinstalling, on first boot, if it is not activated, use the wizard and say "I recently changed my hardware". One of my VMs activated automatically; for my other one I had to use the wizard. Of course, the beauty of it being a VM: just make another VM install and try it.
  11. Very excited for this, hoping the Skylake issues get sorted out so I can run Quick Sync in a Windows VM environment.
  12. With all the research I have done so far, I think it might not be achievable, since unRAID is very stripped down and bare. Running Emby in a Linux VM, it might be possible to use either Quick Sync or VAAPI, but at least in my case I do not think I can without passing my onboard Intel HD 530 graphics through to the Linux OS, as it is obviously needed for Quick Sync, and my Nvidia card is used by my Windows 10 VM and I only have 1 PCIe slot. I know they are working on making this possible, but until then I think I am stuck. If someone with more knowledge could chime in, it would be greatly appreciated.
  13. I just started using Unraid about a week ago, and I am trying to accomplish this myself. I don't think GPU passthrough is what needs to be done. Emby and FFmpeg both support Quick Sync and VAAPI, and both are currently baked in and active within Emby. I think the problem is missing prerequisites. I have only about a week's worth of understanding of dockers, so I don't know whether the missing parts are within the docker or the unRAID system. I have not used Linux in a long time, but I think we need to add Intel packages to the system for the FFmpeg dependencies. Link to information: https://trac.ffmpeg.org/wiki/HWAccelIntro
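A hedged way to check for the pieces VAAPI needs inside a container or VM: the Intel GPU has to be exposed as a DRI render node, and the FFmpeg build has to list vaapi among its hardware accelerators (the renderD128 name is the usual first render node, an assumption about your device numbering):

```shell
# Check whether a DRI render node is visible in this environment.
if [ -e /dev/dri/renderD128 ]; then
  RENDER_STATUS=present
else
  RENDER_STATUS=absent
fi
echo "render node: $RENDER_STATUS"

# If ffmpeg is available, list the hw accelerators it was built with;
# "vaapi" should appear in the output for hardware encoding to work.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -hwaccels
else
  echo "ffmpeg not installed in this environment"
fi
```

If the render node is absent inside the Emby docker, mapping /dev/dri into the container is the usual first step before worrying about missing packages.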
  14. Any word on this? I am fairly new to Unraid, but Emby supports both VAAPI and Quick Sync. From my understanding, Plex will never implement HW encoding, and that is a shame. I am currently in the process of trying to get HW encoding working for Emby running in a docker. It is supported by both Emby and FFmpeg, but I think some dependencies are missing.