alturismo

Moderators
  • Posts

    7,453
  • Joined

  • Last visited

  • Days Won

    80

Everything posted by alturismo

  1. Here is a working config from letsencrypt; there may be some lines to add under extras in your config. I don't use Nginx Proxy Manager anymore so I can't tell for sure, but maybe this helps ... I fetched it from the nginx sample on the OnlyOffice site and adapted it to work with letsencrypt. This is my working conf for using Nextcloud with an external OnlyOffice, when using the lsio nextcloud docker:

     # Use this example to proxy HTTPS traffic to the document server running at 'backendserver-address'.
     # Replace {{SSL_CERTIFICATE_PATH}} with the path to the ssl certificate file
     # Replace {{SSL_KEY_PATH}} with the path to the ssl private key file

     map $http_host $this_host {
       "" $host;
       default $http_host;
     }

     map $http_x_forwarded_proto $the_scheme {
       default $http_x_forwarded_proto;
       "" $scheme;
     }

     map $http_x_forwarded_host $the_host {
       default $http_x_forwarded_host;
       "" $this_host;
     }

     map $http_upgrade $proxy_connection {
       default upgrade;
       "" close;
     }

     server {
       listen 443 ssl;
       listen [::]:443 ssl;

       server_name office-alturismo.*;
       server_tokens off;

       include /config/nginx/ssl.conf;

       add_header Strict-Transport-Security max-age=31536000;
       # add_header X-Frame-Options SAMEORIGIN;
       add_header X-Content-Type-Options nosniff;

       location / {
         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $proxy_connection;
         proxy_set_header X-Forwarded-Host $the_host;
         proxy_set_header X-Forwarded-Proto $the_scheme;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_pass http://192.168.1.93;
       }
     }
  2. Small update: I switched over quickly to the official guacd && guacamole dockers now, and yes, my issue is fixed in 1.2 (remote app connections closing after 2 mins). I use mysql (mariadb) and did not run firstrun.sh; any reason why it's needed? Any changes in the database?
  3. @Taddeusz maybe a version upgrade is in sight? 1.2 is released and I hope my remote app issue would be solved.
  4. @gacpac 50mb sounds almost like you use webdav to upload. In case it's webdav you use: Windows is limited to 50mb by default when using webdav. You can increase the size by editing the registry key
     HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
     and changing the value FileSizeLimitInBytes to at most 4294967295 (4gb). Only relevant if webdav is the protocol you try to use.
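     As a sketch, the registry change above could also be applied by importing a .reg file like the following (the value is a DWORD in bytes; 0xffffffff = 4294967295). Afterwards the WebClient service needs a restart for the change to take effect:

     ```
     Windows Registry Editor Version 5.00

     [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
     "FileSizeLimitInBytes"=dword:ffffffff
     ```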
  5. @Djoss that's what I already did to get it working (I often have to use this way for some scripts), and thanks for pointing it out in case others run into this now too.
  6. @Djoss hi, looks like today's update with the better amc custom handling broke something (at least here). Sample of my custom line:
     --lang de --def ignore=[.](ts|idx2)$ --def plex=192.168.1.71:xxx --def emby=192.168.1.75:xxx -exec bash /config/postprocess.sh {historic.f} {f} {n} {t} {s} {e} {seconds}
     Error response:
     [amc] Invalid language code: [.](ts|idx2)$ --def plex=192.168.1.71:xxx --def emby=192.168.1.75:xxx -exec bash /config/postprocess.sh {historic.f} {f} {n} {t} {s} {e} {seconds}
     As you see, the language code is taken from the def ignore ... so it looks like it uses lang '[.](ts|idx2)$' instead of lang de. As a note, this worked fine before.
  7. @uek2wooF maybe try the prepare function: add a custom vm and use the xml form which is created by the macinabox docker. Don't forget to create your vdisk then too.
  8. Maybe also still open here: exit in the webterminal reopens a new session instead of closing it, 99 times out of 100 at least ... some small glitches I "feel": hitting stop on my win10 vm's doesn't stop them anymore, I have to either log in remotely and shut down, or use virsh shutdown ...
  9. Hi, maybe someone else came across this: stopping the array hangs. I tried a macinabox install for testing purposes, so docker created etc ... now, when I hit stop array, the machine stays there very long. It looks like unmounting a UD disk takes very long, also UD shares etc. Log attached: alsserver-syslog-20200620-0909.zip
  10. If it's meant to just update something which is then changed in the background: ok, that seemed to work.
  11. Ok, seems I was too fast; I guess it's another issue why this happens. When I read the changelog again: "webgui: VMs: change default network model to virtio-net". This virtio-net does not exist here, I only have those, so I have chosen virbr0, which is 192.168.122.0/24, whatever this is about, because I never added it. So, how can I add this virtio-net to unraid? As a note, when I add a new vm I also don't have the option; it still defaults to br0 and I can only choose from br0, br1 or virbr0.
  12. Maybe another question about VM and virtio networking: do I see this correctly, that they have to be on their own subnet? It's weird here now, because my vm's are on 192.168.122.x while my regular net is 192.168.1.x. I never added or changed anything on virtio, so it's the default I guess. Is this a must have? Because I guess I'll run into issues when I try to rdp to my VMs from non-unraid machines.
      ### Update: confirmed, I can't reach any win10 vm from either guac or a laptop in the LAN network 192.168.1.x ... so, is this still open, if I read correctly that the kernel errors appear when using the same network as the dockers do?
  13. Maybe some questions before trying the beta. I currently use:
      - 1 cache drive (nvme 1tb), xfs formatted
      - 1 UD device where my VM's sit, nvme 500gb, xfs formatted
      - 4 drives in the array, all xfs formatted
      When I now upgrade, does this collide with the current setup? Do cache drives have to be btrfs, or am I still good to go? The reason for xfs: my cache drive was btrfs before and crashed 2x completely with complete data loss ... since xfs I'm good. If I read correctly, this should still be fine as a single drive pool.
      Next would be: change my current UD drive to cache2 (separate single drive cache pool for my VM's). Is that also possible, as a separate pool with a single xfs formatted drive? Or rather keep it UD?
      The VM's need a network change so the kernel error is gone; when changing this (webui), are all manual changes gone again or are they persistent now (sample: cpu pinning)?
      The VFIO PCIe config plugin should be obsolete then when using the new settings feature; uninstall the plugin before the update, or doesn't it matter?
      Last for now: does the new kernel fix nested virt? Win 10 wsl2 subsystem activation lets the VM crash badly. Any changes done there in the webui, or still the same procedure to activate?
      Thanks ahead for any notes
  14. @TrondHjertager that's all; a custom network is only needed when it's on the same server locally, so traffic stays local only. And you should also be able to connect via plex.yourdomain.com, which would then be a direct connection.
  15. base xml from g2g update, and a xteve.xml after the update
  16. It shouldn't affect the current data, because xteve caches its data (you see the cache files), so the current epg state shouldn't be affected; new programs just aren't added (as the update failed). But wiping existing data is not the way it should work. Sadly I can't force it, because my lineups seem pretty stable and I never experienced a complete lineup loss due to 1 failed update, only if it fails for like 14 days (which matches my setup). Definitely something to at least point at ... and maybe next time you encounter this issue, samples would be nice to look at. But you would have to build some routing into your small script lines so it proceeds on the failure.
  17. Ok, that surprises me, because when you use xteve, xteve will keep its data (cached) and won't empty the current data ... if it does empty the data, maybe open a topic on the xteve github, because that would be a bug.
  18. Then rather maybe contact SD, because there is not much to do here when you set everything up correctly and it has been working. Or maybe check if the channels you miss are in a different lineup which may work. Sorry
  19. And you also followed the correct way to re-add ... so, a simple update would be: go to the config, rename cronjob.sh, force an update (a new cronjob.sh is created), fill out the fields again to match your settings (the old cronjob.sh may serve as source), then start a new g2g config as stated:
      guide2go -configure /guide2go/whatevername.yaml
      If you then want to force new xml data, start cronjob.sh manually as stated:
      /config/cronjob.sh
      All done ... also, all mappings etc. should still be there if you took the same lineups.
  20. can also confirm locale is working ...
  21. @Rick Gillyon hi, maybe take a look at what the readme says ... and some posts above:
      guide2go -configure z.yaml <<-- wrong
      guide2go -configure /guide2go/z.yaml <<-- correct, as stated in the readme
  22. In case you have NC on a custom network and mariadb on host: did you enable the feature in docker settings so they can reach each other? And did you also allow the ip range for the nextcloud database user ...
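      As a sketch of the second point (the user name, database name, password and subnet are assumptions, adjust them to your setup): in mariadb you can allow the nextcloud database user to connect from a whole ip range with a wildcard host, e.g.:

      ```sql
      -- assumption: db user 'nextcloud', db 'nextcloud', docker custom network 172.18.0.0/16
      CREATE USER 'nextcloud'@'172.18.%' IDENTIFIED BY 'yourpassword';
      GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'172.18.%';
      FLUSH PRIVILEGES;
      ```

      A user created only as 'nextcloud'@'localhost' would not be reachable from a container on a custom network, which is a common cause of this connection problem.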
  23. @steve1977 this was pointed at as a small solution. Whether it works with express I can't tell; you can try it if you want, or maybe try the more complex but also more flexible one from binhex, privoxy-vpn, which I also used before. The procedure to run "client" containers is described above; it doesn't matter which vpn docker you use, it should always be the same. I don't have any exclusion url management built in (not my usecase).
  24. Thanks for the cron hint. Ok, after some tests and some more digging: it is a cron thing to get the writeout (like I remembered), but there's a bit more to do in this docker. Sample: the command to force it would be
      php occ documentserver:flush
      which could also be used in unraid's cron, but it may cause issues while a file is in use ... not recommended. This should be triggered by the system cron job, but as we see, the system cron is not working in this docker ... so I played around with the hints given here and now use unraid to call cron.
      Prerequisite: the user has to be logged out so that the system cron will trigger the writeout (the flush command is done by system cron). As we know, most likely just the browser tab gets closed ... by default the user will stay logged in, as you can test by simply reopening NC: there is no login prompt, you land on your main page, already logged in. So I'm testing now with the following settings to force an inactivity logout:
      'remember_login_cookie_lifetime' => 1296000,
      'session_lifetime' => 3600,
      'session_keepalive' => false,
      This should kick the user out after 1 hour of inactivity; then the next cron run should do the writeout ... so in sum:
      - logout manually -> system cron should do its writeout job
      - logout forced by settings -> system cron should do its writeout job (still testing)
      - no logout -> run the flush command ... will force a writeout (keep in mind: possible data loss while a file is active)
      Another way would be to add a second cron (sample: run once a day during the night) to flush, instead of the forced logout, to be safe.
      And btw, the cron command like this
      docker exec nextcloud php -f /var/www/html/cron.php
      without the -u www..... also works here, and the webui also reports that cron was running; I had issues when using the one from the thread. I'll report back if the forced logouts work as expected.
      ### EDIT Looking good, the writeout is now also done when just closing the tab.
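      A sketch of how the two pieces above could look together (the container name "nextcloud", the 5-minute interval and the lsio config path are assumptions, adjust to your setup). The unraid-side cron entry calling Nextcloud's cron.php inside the container:

      ```
      # unraid crontab entry (e.g. via the User Scripts plugin), every 5 minutes
      */5 * * * * docker exec nextcloud php -f /var/www/html/cron.php
      ```

      And the inactivity-logout settings in context, added inside the $CONFIG array of config.php:

      ```
      // assumption: lsio layout, /config/www/nextcloud/config/config.php
      'remember_login_cookie_lifetime' => 1296000, // remember-me cookie lifetime in seconds (15 days)
      'session_lifetime' => 3600,                  // session expires after 1 hour
      'session_keepalive' => false,                // don't refresh the session in the background
      ```

      Nextcloud's admin settings should then show "Background jobs: Cron" with a recent last-run time if the external cron call is working.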