Everything posted by casperse

  1. I tried combining them like this but got a strange error afterwards:

         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name photo.doamin.dk,photos.domain.dk,piwigo.domain.dk,piwigo.domain2.dk;

         nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
         nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised
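     For reference, the [emerg] line itself names the likely fix: with several long names on one server_name line, nginx's server-names hash runs out of room. A minimal sketch of the change, assuming the stock nginx.conf layout (128 is just the usual next step up from the default of 64 that the error reports):

         http {
             # raise from the compiled-in default the error message reports (64)
             server_names_hash_bucket_size 128;
         }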
  2. Maybe a stupid Q, but is it okay to add multiple subdomains like this?

         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name photo.doamin.dk;
             server_name photos.domain.dk;
             server_name piwigo.domain.dk;

     And could I just add a piwigo.domain2.dk as well? It might work, but I don't want to go against the approved structure.
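     For anyone searching later: nginx's server_name directive takes several space-separated names in one directive, so neither repeated directives nor commas are needed. A minimal sketch, reusing the names from the post and the ssl.conf include this image ships with:

         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             # one directive, names separated by spaces (not commas)
             server_name photos.domain.dk piwigo.domain.dk piwigo.domain2.dk;
             include /config/nginx/ssl.conf;
         }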
  3. Ok, did a separate install and so far so good... pretty fast on my cache drive. One thing: I think there needs to be some separate setup of write access to the config files?

         Warning: mkdir(): Read-only file system in /config/www/gallery/plugins/piwigo-videojs/include/function_sync2.php on line 157

     Any input on how to solve this?
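     For what it's worth, "Read-only file system" usually points at the mount itself rather than simple file permissions. A quick check, assuming the container is named piwigo (adjust to your own container name):

         # show how the container's volumes are mounted; look for "RW": false on /config
         docker inspect -f '{{ json .Mounts }}' piwigo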
  4. Hi All. I'm trying to find alternatives to the "DS Photo" page on my Synology, using Unraid and dockers. Looking here, there are only 3 pages for this docker; are most people using something else, or does this just work great? I have Nextcloud installed with MariaDB and it's okay, but not really a solution for a large TB photo/video collection 😞 So if this is the solution, would it be okay to create a second DB in my existing MariaDB docker for the DB Piwigo needs? Or is it better to install a separate MariaDB docker only for this? I haven't really found any videos about the installation or usage of this, so please share your experiences. Thanks
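     In case it helps anyone landing here: a second database inside an existing MariaDB container is normal practice. A minimal sketch, where the container name mariadb and the piwigo database/user names are assumptions:

         # open a client inside the existing container
         docker exec -it mariadb mysql -uroot -p

         -- then, at the MariaDB prompt:
         CREATE DATABASE piwigo;
         CREATE USER 'piwigo'@'%' IDENTIFIED BY 'choose-a-password';
         GRANT ALL PRIVILEGES ON piwigo.* TO 'piwigo'@'%';
         FLUSH PRIVILEGES;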
  5. No, but I have an idea about what could be causing this; so far I haven't been able to prove my theory. According to the manual for this specific MB, some of the PCIe slots share bandwidth. Reading the manual, it seems that the last slot, PCIEX4_2, is shared with the M2M connector, not the M2A! BUT - the funny thing is that I don't have anything installed in the M2M connector!!! So I want to try removing the SATA connections and see if that fixes my PCIEX4_2 slot. The problem is that I would then have to replace my HD controller and cables - not something easy to do 😞
  6. Hi Everyone. I know this is an older thread, but my question is related to this problem. So far I have been passing a disk through to a VM, but then I lose a drive in the array to that one VM. So would the best solution be to create a vdisk.img on a UAD drive, 2-3 TB in size? (I already have 1 UAD.) Would I need to change anything in the VM config to get a size above 2TB? Is there any way to create it as a fixed size, taking up the space when the vdisk file is created?
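     A minimal sketch of one way to create such a vdisk, assuming the unassigned drive is mounted at /mnt/disks/UAD1 (the path is an assumption). The 2TB ceiling normally comes from MBR partitioning inside the guest rather than from the VM config, so the disk would need a GPT partition table in the guest; preallocation=falloc reserves the full size at creation time:

         # fully pre-allocated 3 TB raw vdisk on the unassigned drive (path is an assumption)
         qemu-img create -f raw -o preallocation=falloc /mnt/disks/UAD1/vdisk1.img 3T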
  7. Yes, it's the same thing... 🙂

         set $upstream_app 192.168.0.12;
         set $upstream_port 8123;
         set $upstream_proto http;
         proxy_pass $upstream_proto://$upstream_app:$upstream_port;

     I just used the variables in the last line:

         proxy_pass http://192.168.0.12:8123;

     But I can see that it is only sort of working; there seems to be some difference between this reverse proxy and the one Synology sets up. I get to the webpage, but looking at the log from my app I can see a new error message: shared.webhookError 1 - very strange? I made this for the Synology VM:

         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name mydomain.dk;
             include /config/nginx/ssl.conf;
             # add_header X-Frame-Options "SAMEORIGIN" always;
             add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
             client_max_body_size 0;
             location / {
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 proxy_pass https://192.168.0.10:5001;
                 proxy_max_temp_file_size 2048m;
             }
         }

     The Synology domain looks to be working perfectly; any changes that I need to make? It's for a domain, not a subdomain, and I used the template for Nextcloud as a base.
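     One guess on the shared.webhookError, not a confirmed fix: Home Assistant's frontend and companion app talk over a websocket, so if the upgrade headers aren't being forwarded (proxy.conf in this image normally handles that, but it's worth confirming) the page can load while the app still fails. The standard nginx websocket lines look like this:

         location / {
             proxy_pass http://192.168.0.12:8123;
             # forward websocket upgrade headers for the HA frontend / companion app
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
         }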
  8. Yes, I see - the docker name should be the IP (I'm making this too complicated). I tried, but it doesn't seem to work:

         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;
         set $upstream_app 192.168.0.12;
         set $upstream_port 8123;
         set $upstream_proto http;
         proxy_pass $upstream_proto://$upstream_app:$upstream_port;

     I tested, and it works when accessing http://192.168.0.12:8123/lovelace/default_view directly. The domain is getting a cert according to the log file, and I tried the mysub.* wildcard, but with no effect.
  9. Hi All. I am moving my last reverse proxy off my Synology box (built-in function with Let's Encrypt). I already have it working for all my dockers, following the instructions to set up and use the special "Proxynet" in Docker. But my last servers are running as VMs, not dockers, so I cannot use the network type "Proxynet". I have two servers left running as VMs with fixed IPs (both on a virtual LAN on br0). I found the template for Home Assistant:

         # make sure that your dns has a cname set for homeassistant and that your homeassistant container is not using a base url
         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name home.mydomain.dk;
             include /config/nginx/ssl.conf;
             client_max_body_size 0;
             # enable for ldap auth, fill in ldap details in ldap.conf
             #include /config/nginx/ldap.conf;
             # enable for Authelia
             #include /config/nginx/authelia-server.conf;
             location / {
                 # enable the next two lines for http auth
                 #auth_basic "Restricted";
                 #auth_basic_user_file /config/nginx/.htpasswd;
                 # enable the next two lines for ldap auth
                 #auth_request /auth;
                 #error_page 401 =200 /ldaplogin;
                 # enable for Authelia
                 #include /config/nginx/authelia-location.conf;
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_app homeassistant;
                 set $upstream_port 8123;
                 set $upstream_proto http;
                 proxy_pass $upstream_proto://$upstream_app:$upstream_port;
             }
         }

     But I'm not sure how to point it to a specific mysub.mydomain.dk, or how to add the IP. I would prefer not to have any mysub.* wildcard; I have 4 different domains added (they work fine). I've tried looking through the posts but so far have not succeeded in getting this working (trial & error). Hoping to adjust the above to work with Synology port 5001 https running as a VM. Thanks!
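     For anyone reading top-down: the adjustment tried in the follow-up posts above boils down to swapping the docker name for the VM's fixed IP in the upstream variables and keeping server_name on the exact subdomain, so no wildcard is needed:

         # inside the location block: point the upstream at the VM's fixed IP
         set $upstream_app 192.168.0.12;
         set $upstream_port 8123;
         set $upstream_proto http;
         proxy_pass $upstream_proto://$upstream_app:$upstream_port;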
  10. I did notice a big improvement in speed (fast NVMe!). With Plex metadata and thousands of covers, scrolling through a media library gives me near-instant loads - I also like the scrolling animation for each media file (generated per file if you have it enabled - hundreds of gigs). I can recommend buying the biggest NVMe you can afford - I went through 3, increasing the size each time (and would do it again if it were possible, but 2TB is the max today). @mgutt is correct; I have now placed the following on my NVMe cache drive: appdata, domains, and /mnt/cache/system/docker/docker.img. Works great!
  11. Can anyone explain the above situation? What would happen - is this the drawback of getting "overflow" from cache to array if you use a static /cache/ path?
  12. Hi Everyone. I needed more NICs, and until I get a 10GbE card I wanted to use one more 4-port Intel NIC! Have I configured this wrong? My cards: the 4 ports from the "Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)" work fine! But only the first two of the "Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)" work? I have created the stub like this: "8086:10e8,8086:1521". It's like the last 2 NICs are deactivated - I can see a short connection flash orange, but no green activity LED. It's almost like I only have half of the card! Could this be some PCI Express limitation of my MB? Shared lanes? Or did I just configure this wrong?
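     One thing that may be worth checking: vfio-pci stubs by vendor:device ID, so 8086:1521 matches every I350 port on the card, not just selected ones. A quick way to see which driver each port is actually bound to (the device ID is the one from the post):

         # list all I350 functions and the kernel driver currently bound to each
         lspci -nnk -d 8086:1521

     If the missing two ports show vfio-pci instead of igb, it's the stub rather than the slot.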
  13. And no changes I "could" have made will make any difference to how it's used in Unraid. (Yes, I did the above changes, so maybe I'll reset it to the default values.)
  14. So I need to do nothing...? Wouldn't it just be the right thing to do (setting boot: Disabled, skipping this every time I reboot the server)? And does hot-swap work if I don't set "Removable Media Support" to YES? And do I get all the drives listed during boot if I don't set anything? Sorry, I was just sure I did something on the old controller?
  15. Hi All. PLEASE HELP!!! I now have the opportunity to borrow an LSI SAS9305-24i card (JUST FOR TESTING) and replace my LSI Logic SAS 9201-16i with it. (I currently use this 16i + 8 x SATA on the MB, cabled to 2 x Mini-SAS 36-pin SFF-8087 on the backplane.) So this should have native IT mode (as standard) and be able to replace the existing SATA ports currently used on my MB (8 x SATA). I had to see if the firmware was updated, and it looks like it is... BUT I cannot remember if I need to change anything else in the config tool? I was looking to remove any boot options for drives - Unraid doesn't need this (just a bunch of drives) - and BIOS is just access to the BIOS of the card, right? There were also some options to display the drives during boot... And do I have to set "Removable Media Support" to get the hot-swap functionality for my drives like I have today? Sorry, it's been a long time since I tinkered with an LSI card (they just work!)
  16. Oops! Thanks @trurl for reminding me! Just checked, and after my last server migration I forgot I had changed it, so it was set to 4 hours! Back to 1 hour now (which I think is the default value).
  17. I just have "Use default" - I think that's 1 hour, so it should be plenty.
  18. Since this only monitors the Plex service, I think it's okay to spin up all drives for a person accessing the system (it's a VIP service). I was wondering what the timeout of this "spin-up state" was? I can see that accessing the Plex service and any library folder in Plex draws +10% CPU every time! Even just accessing my Tautulli app on my iPhone pushes the script to activate, which is okay and by design! But what happens after you have accessed the library and start to scroll through titles? They are all on the cache and load into the browser cache, so the %CPU goes down again - how long does "browsing titles" keep the drives up before they spin down again? I use your test script to monitor the activity of the script (4%):

          while true; do
              plex_cpu_load=$(docker stats --no-stream | grep -i plex | awk '{sub(/%/, "");print $3}')
              echo $plex_cpu_load
              if awk 'BEGIN {exit !('$plex_cpu_load' > 4)}'; then
                  echo "Container's CPU load exceeded threshold"
              fi
          done

      It's a great way to see how and when it triggers.
  19. Finally - thanks for all your help! I think it's finally at the point where I can use this script without spinning my drives up 24/7. But maybe I should set it to around 4-5% - I still get some spikes after 3 min: "3.71 - Container's CPU load exceeded threshold". And I think Plex activity from any user going into any media library will go way above 4%... even 10% in my testing. It could be interesting to see whether this actually has any noticeable impact on my electric bill, since all 22 drives (not including the 2 parity drives) would be spun up during playback of any movie. I actually set up the separate UAD drive for frequently accessed media to avoid spin-ups, but that drive ran 24/7; I think this is a reasonable trade-off... Thanks again. Shouldn't it keep them spinning for the 2-3 min between hitting the threshold again, keeping the drives spun up?
  20. Wow - 3 min!

          Script Starting Oct 24, 2020 11:59.04
          Full logs for this script are available at /tmp/user.scripts/tmpScripts/Plex Preloader v0.9/log.txt
          Available RAM in Bytes: 11931406300
          Amount of Videos that can be preloaded: 195
          ....*.mkv*.srt
          tr: write error: Broken pipe
          tr: write error
          cut: write error: Broken pipe
          Script Finished Oct 24, 2020 12:02.09

      I would like to test the impact of doing this but am struggling with how to accomplish it. It needs to be "remote" for transcoding to kick in - how did you go about this during your dev/testing? Also, some people have fixed "remote transcode settings", like 720p 4Mb/s - would those overrule this and make the Preloader obsolete? Sorry for the many Q's - again, great work! Your work in so many ways is really moving the "Media tweak" post forward!
  21. Oh yes, stuff happens, especially when people like us go and do special tweaks. (Stuff also happened to me, but the backup script from @mgutt is great - I use it for both Emby and Plex (appdata is huge! incremental backup is great) - and of course the VM/Docker backup apps from the Unraid app community.) This issue got me thinking: what would happen if you have the appdata share set up as shown, all your appdata paths in the dockers changed to /cache/, the global share settings set as shown, and then an overflow of Plex metadata in appdata (with the "Prefer" setting)? Would it write to the array? The path from the docker would only see the cache drive, so the new data would not be available? How would this work? (I don't really want to test this in practice.) Sorry if this is a stupid Q...
  22. Yes, I pretty much have /cache/ written everywhere now :-) Funny, I said that it looked like a heartbeat - I didn't know there actually was one! (I will disable it when there is no traffic and test again.) Thanks!
  23. Maybe it's not the size of the srt files but the fact that they are spread across 24 drives? Anyway, your code is solid - my server doesn't crash and it still runs perfectly! It just takes some time, LOL (I kept it at the default 50%).
  24. All of the srt files? For movies & TV shows?! Wow. I have an average of 3-4 srt files for each TV show and most movies (Nordic TV, sometimes also German, which is close to DK). How does it have enough space left for loading the right subtitles? I just did a count on TV shows alone and I have around +40K srt files - so now I understand why it ran "forever". At an average srt file of 30-45K, that's roughly 1.2-1.8G of RAM (TV only).