Everything posted by exico

  1. I have a weird "bug" since I upgraded from RC1 to RC2: the network connection on the server goes "down" every now and then. For example, I copy a file from Unraid to my machine at 100-something MB/s, then mid-transfer it drops to 0 bytes/s and resumes after a while. This happens on Windows 10 and 11, wired or wireless. A transfer from a VM on the Unraid server itself has no issues, and the web interface works fine even during the 0 bytes/s interval.
     What I tried:
     - Changing MTU back to 1500 on Unraid, switch and machines - changed nothing
     - Changing the network card on the client - nothing
     - Upgrading a client from 10 to 11 - nothing
     - Updating the network card driver on the client - nothing
     - Restarting both clients and Unraid - nothing
     My config:
     - Supermicro X9SRL-F with a Xeon E5-2670 and 64 GB RAM
     - 2x Intel 82574L gigabit onboard with a LACP bond that worked fine up to RC1
     - 8 disks of various sizes with 1 parity, 1 cache NVMe SSD
     I will try disabling the bond next, but I need it and it worked fine up to RC1. I attached diagnostics: unraidsrv-diagnostics-20211113-1517.zip
     EDIT: nope, it's not the bond; the problem occurs even with just one cable connected and the bond removed.
     EDIT2: I tried with a Broadcom NetXtreme II that I had lying around and I have the same problem. I also tried safe mode to no avail. I guess I'll just revert to RC1 for the moment.
     EDIT3: I can confirm that on RC1 I do not have the problem and everything works fine, except my Win11 VM that I created on RC2, but that's understandable because the TPM template was added in that version.
  2. Yeah, mine too. I was wondering why one of the services just died, then I checked the DB connection... Not sure why it worked, but I changed the repo from linuxserver/mariadb to linuxserver/mariadb:alpine-version-10.5.12-r0 (an older version), then changed it back to linuxserver/mariadb:latest, and it works. Not ideal, I know, but it works. The log for the error points to a failure in InnoDB:

         210825 18:56:18 mysqld_safe Starting mariadbd daemon with databases from /config/databases
         Warning: World-writable config file '/etc/my.cnf.d/custom.cnf' is ignored
         2021-08-25 18:56:18 0 [Note] /usr/bin/mariadbd (mysqld 10.5.12-MariaDB) starting as process 4703 ...
         2021-08-25 18:56:18 0 [Note] InnoDB: Uses event mutexes
         2021-08-25 18:56:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
         2021-08-25 18:56:18 0 [Note] InnoDB: Number of pools: 1
         2021-08-25 18:56:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
         2021-08-25 18:56:18 0 [Note] mariadbd: O_TMPFILE is not supported on /var/tmp (disabling future attempts)
         2021-08-25 18:56:18 0 [Note] InnoDB: Using Linux native AIO
         2021-08-25 18:56:18 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
         2021-08-25 18:56:18 0 [Note] InnoDB: Completed initialization of buffer pool
         2021-08-25 18:56:18 0 [ERROR] InnoDB: Upgrade after a crash is not supported. The redo log was created with MariaDB 10.4.21.
         2021-08-25 18:56:18 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
         2021-08-25 18:56:18 0 [Note] InnoDB: Starting shutdown...
         2021-08-25 18:56:18 0 [ERROR] Plugin 'InnoDB' init function returned error.
         2021-08-25 18:56:18 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
         2021-08-25 18:56:18 0 [Note] Plugin 'FEEDBACK' is disabled.
         2021-08-25 18:56:18 0 [ERROR] Unknown/unsupported storage engine: InnoDB
         2021-08-25 18:56:18 0 [ERROR] Aborting
         210825 18:56:18 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
  3. The Fvtt docker is on port 30000. It is a web application for D&D and more, based on Node.js.
  4. I'm having a problem where I can access fvtt fine through a domain, but when I paste the key and get to the configuration to start a world, the "play" button goes grey and does nothing. If I access directly through unraid:port, everything works. I'm using this proxy manager:
     As options I enable Cache assets, Websocket support and Force SSL (Let's Encrypt), and I added this bit in the custom configuration:

         add_header Referrer-Policy "same-origin" always;
         add_header Access-Control-Allow-Origin https://domain.redacted always;
         client_max_body_size 1024M;
         fastcgi_buffers 64 4K;
         set $upstream_app FoundryVTT;
         proxy_max_temp_file_size 1024m;

     The complete automated config file looks like this:

         # ------------------------------------------------------------
         # domain.redacted
         # ------------------------------------------------------------
         server {
             set $forward_scheme http;
             set $server "";
             set $port 30000;

             listen 80;
             listen [::]:80;
             listen 443 ssl http2;
             listen [::]:443;

             server_name domain.redacted;

             # Let's Encrypt SSL
             include conf.d/include/letsencrypt-acme-challenge.conf;
             include conf.d/include/ssl-ciphers.conf;
             ssl_certificate /etc/letsencrypt/live/npm-2/fullchain.pem;
             ssl_certificate_key /etc/letsencrypt/live/npm-2/privkey.pem;

             # Force SSL
             include conf.d/include/force-ssl.conf;

             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection $http_connection;
             proxy_http_version 1.1;

             access_log /data/logs/proxy-host-2_access.log proxy;
             error_log /data/logs/proxy-host-2_error.log warn;

             add_header Referrer-Policy "same-origin" always;
             add_header Access-Control-Allow-Origin https://domain.redacted always;
             client_max_body_size 1024M;
             fastcgi_buffers 64 4K;
             set $upstream_app FoundryVTT;
             proxy_max_temp_file_size 1024m;

             location / {
                 proxy_set_header Upgrade $http_upgrade;
                 proxy_set_header Connection $http_connection;
                 proxy_http_version 1.1;

                 # Proxy!
                 include conf.d/include/proxy.conf;
             }

             # Custom
             include /data/nginx/custom/server_proxy[.]conf;
         }

     proxy.conf:

         add_header X-Served-By $host;
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-Scheme $scheme;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_set_header X-Forwarded-For $remote_addr;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_pass $forward_scheme://$server:$port;

     options.json:

         {
             "port": 30000,
             "upnp": false,
             "fullscreen": false,
             "hostname": "domain.redacted",
             "routePrefix": null,
             "sslCert": null,
             "sslKey": null,
             "awsConfig": null,
             "dataPath": "/foundry/data",
             "proxySSL": true,
             "proxyPort": 443,
             "minifyStaticFiles": true,
             "updateChannel": "release",
             "language": "it.FoundryVTT - it-IT",
             "world": null
         }

     On this proxy I have all sorts of hosts: Plex, WikiJS (which runs on Node.js and has no problems of this sort), my firm's site, a GitLab, etc., and FoundryVTT is the only one I am having problems with. What am I doing wrong?
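     For comparison, my recollection of the reverse-proxy example in Foundry's own nginx guide boils down to roughly the location block below; treat it as a sketch rather than the exact official text. The notable difference from the generated config above is the explicit Connection "upgrade" header for websockets.

```nginx
location / {
    # Forward client identity and scheme to Foundry.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Websocket upgrade: Foundry's game sockets need these.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # 30000 is Foundry's default port, matching options.json above.
    proxy_pass http://127.0.0.1:30000;
}
```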
  5. Disabled "Host access to custom networks" and now I can access everything except one Docker container, but I will figure that out later.
  6. I did not; I just tried it and nothing changed. Just a hypothesis: could the setting "Host access to custom networks" being enabled in the Docker settings be a problem? I will have to wait until I can stop the Docker containers to test this, because there is a task running at the moment.
  7. I'm using
  8. Yeah, my config includes allowed IPs:

         [Interface]
         PrivateKey = REDACTED
         Address =
         [Peer]
         PublicKey = REDACTED
         PresharedKey = REDACTED
         AllowedIPs = ,
         Endpoint = REDACTED:51820

     NAT is on Yes as per the screenshot. As for what I'm trying to access: I tried the server's IPMI, the web interface of the switch, and the pfSense interface on the router. Nothing comes up; only the Unraid server works and shows up. Everything worked fine before...
  9. Well, it was already on "Remote access to LAN". I can connect, but I can only access the Unraid server and nothing else on the LAN.
  10. Here's the output:

          [#] ip link add wg0 type wireguard
          [#] wg setconf wg0 /dev/fd/63
          [#] ip -4 address add dev wg0
          [#] ip link set mtu 1420 up dev wg0
          [#] ip -4 route add dev wg0
          [#] ip -4 route add dev wg0
          RTNETLINK answers: File exists
          [#] ip link delete dev wg0

      I guess the error is "File exists". What does that mean?
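      If I understand wg-quick right, it adds one route per AllowedIPs entry, so "File exists" would mean a matching route is already in the routing table (in my case presumably the LAN /24, which the kernel already routes via br0). A way to check for the clash (a sketch, assuming iproute2 is installed):

```shell
# List the current IPv4 routes and look for an entry that overlaps the
# subnet wg-quick tried to add; fail soft if the ip command is missing.
ip -4 route show 2>/dev/null || echo "ip command not available"
```

      If a route for the same subnet shows up here, that's the conflict; removing the overlapping AllowedIPs entry (or the stale route) should let the tunnel come up.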
  11. I'm having a strange problem and I cannot figure out what's wrong. I edited a peer recently and from that point forward I cannot activate the tunnel. I tried saving it, removing it and re-importing, but whenever I add the LAN network (x.x.x.x/24) to "Peer allowed IPs", the tunnel won't activate.

      Tunnel:
      Local tunnel network pool:
      Local tunnel address:
      Endpoint: [redacted, static IP]:51820
      Local server uses NAT: No (I tried with Yes; nothing changes)

      First peer:
      Peer name: something
      Peer type of access: Remote access to LAN
      Peer tunnel address:
      Peer allowed IPs:

      Whenever I add the LAN subnet to "Peer allowed IPs", the tunnel won't stay on: if I press the button it moves, but if I F5 the page or go to another page and come back, it's off. Syslog just says that the tunnel turned on and off. Is there a more useful log for WireGuard? There is nothing in /var/log. On this machine I already have a server-to-server tunnel that works flawlessly.
  12. I'm an idiot 😆 Ping replied and I got a handshake. I forgot that WireGuard does not initiate the connection until there is a request. Gonna change the keys, thanks!
  13. Hello! I've been trying to set up a server-to-server access which, from what I understand, is like LAN-to-LAN but without the routing. I followed the steps in the first post but I cannot get a handshake. Here's what I configured (I will change the keys later, once I get this working). I have static IPs in both locations, which I redacted for obvious reasons. I forwarded port 51830 in both pfSenses. For context: in both locations I already have a "Remote access to LAN" setup with various clients that works fine (on a different port/tunnel, obviously). I cannot get a handshake; do you have any idea what I'm doing wrong?
  14. For those who had this problem:

          UniFi Controller startup failed
          We do not support upgrading from 6.2.23.

      Checking the logs in /logs/server.log:

          [2021-05-13T12:09:08,198] <localhost-startStop-1> INFO db - DB version (6.2.23) different from Runtime Version(6.1.71), migrating...
          [2021-05-13T12:09:08,205] <localhost-startStop-1> ERROR db - We do not support upgrading from 6.2.23.

      I searched around in the data folder and found that the "DB version" is just a text file: if you change the version in it, the controller simply skips the check and migration. You can find the file at /data/db/version. Just edit it with Notepad++ or any editor that is not Windows Notepad and change it from 6.2.23 to 6.1.71, then save and start the Docker container. I don't know if this can cause problems in the long term, but at least you can start the controller back up.
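      In shell form, the workaround above is just the few lines below. DATA_DIR is a placeholder (here it defaults to a demo path): point it at the UniFi container's mapped data directory, and keep a backup of the file before changing it.

```shell
# Sketch of the version-file workaround; DATA_DIR is a hypothetical path.
DATA_DIR="${DATA_DIR:-/tmp/unifi-demo/data}"           # demo default
mkdir -p "$DATA_DIR/db"
echo "6.2.23" > "$DATA_DIR/db/version"                 # simulate the stale version file
cp "$DATA_DIR/db/version" "$DATA_DIR/db/version.bak"   # back it up first
printf '6.1.71' > "$DATA_DIR/db/version"               # match the runtime version
cat "$DATA_DIR/db/version"
```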
  15. You are right, it's the stupid VMware Player VM that is broken. I tried with a VM in Unraid and it loads fine. I did not try with a physical machine yet, because I use my desktop as my work PC and my notebook has no Ethernet. I wonder why VMware Player doesn't work.
  16. I'm struggling to boot live media from the netboot docker and I have no idea what I'm doing wrong.
      - I set up the live endpoint correctly (I had to reroute port 8080 to 8069 because it was already used by another container, but it works; I can navigate it fine).
      - I downloaded the assets (in my case I tried Pop!_OS 20.04 and Ubuntu MATE 20.10, and I can see them in etc).
      - I set up a VMware VM that boots into PXE and I get this error (see attached image). I get the error whenever I boot the Pop!_OS or Ubuntu MATE live images.
      I tried Ultimate Boot CD and it works (I had to download it manually and change the name because there is a version mismatch in the netboot config files). I tried Ubuntu network install 20.10 and it works; GParted works; Clonezilla Ubuntu stable works. The VM has plenty of RAM (8 GB) and disk (64 GB) and is in bridge mode, so it has its own IP and MAC. DHCP is provided by my pfSense and is configured correctly.
  17. I don't think it's a general support issue; let me explain: I had time to restore the USB backup I made before installing beta 22. Now I can delete the Docker container (with the beta I couldn't), and the scrub works instead of stopping with 0 seconds and "aborted" as its status.
  18. Not sure if anyone posted this already, but I installed the beta a couple of days ago and I'm getting strange behavior with Docker. The first time, half of the running containers just stopped working out of the blue, giving me "read-only file system" errors. I restarted the server and everything worked. Recently I tried to update Handbrake, and in the middle of the update it gave errors about a read-only file system. The container is now present in the list with a question mark and "not available" in the version column, and I cannot remove or reinstall it; it gives me a generic server error. I even tried to remove it from the terminal:

          root@UNRAIDSRV:~# docker rmi cf94ba0a9bd0
          Error response from daemon: open /var/lib/docker/image/btrfs/.tmp-repositories.json462281471: read-only file system

      Reinstalling it didn't help either. Not sure if this is the beta build or something else. One more thing: if I go to Settings -> Docker and run a Scrub, it just doesn't start: duration 0, status "aborted", with or without "correct file system errors".
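      For anyone hitting the same thing: as far as I know, btrfs remounts a filesystem read-only when it detects an error, and it logs the reason. A sketch of where I'd look first (assumes shell access on the host; dmesg may need root):

```shell
# Look for the btrfs error that forced the read-only remount.
dmesg 2>/dev/null | grep -i btrfs | tail -n 20 || true
# Also check which mounts are currently read-only.
grep ' ro[ ,]' /proc/mounts || echo "no read-only mounts found"
```

      If a btrfs error shows up there, that would explain both the failed image removal and the scrub aborting immediately.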
  19. I cannot really remove it, it's integrated into the motherboard, but I have a 9211 in my personal server and it boots fine. I don't think the LSI controller is the problem: from what I can see it loads the drivers fine and stops a lot later. If the LSI were the problem, I think it would give an error when loading the drivers.
  20. Many things; it's a Supermicro "SuperServer 2013S-C0R" configured with an Epyc 7302P, an H11SSL-C rev 2 and 2x 16 GB DDR4 ECC RAM. The BIOS is the latest available (2.0b). For the rest, I can see clearly:
      - An M.2 add-on card "AOC-SLG3-2M2" Link
      - A Broadcom 3008 SAS3 controller integrated into the motherboard, connected to what I think is a dummy backplane with no processing, just passthrough, since it has just 8 bays
      - The classic Supermicro IPMI
      For disks: 2x WD Red 4TB, 1x WD Red 1TB and 2x NVMe Samsung 970 Pro 512GB placed on the add-on card.
  21. Having the same issue on a Supermicro H11SSL-C v2.0 with an Epyc 7302P. I tried booting in safe mode, UEFI and non-UEFI, etc., but nothing; it hangs in the same spot every time.
      EDIT: Backed up the config folder, wiped the USB, flashed 6.8.0, restored the config and now it boots fine. I guess it's a problem with Epyc or Threadripper on this new version; I wonder what it could be.
      EDIT2: I attached the diagnostics from v6.8.0: zunraid-diagnostics-20200118-2312.zip
  22. It would be nice to add support for Apple updates in the Steamcache docker. I managed to add it by "hacking" the existing configurations; if you plan to add it to the docker, this is what I did to make it work.

      bootstrap.sh mod - after this existing block:

          if [ -z "$WINDOWSCACHE_IP" ] && ! [ "$DISABLE_WINDOWS" = "true" ]; then
              WINDOWSCACHE_IP=$LANCACHE_IP
          fi

      add this piece:

          if [ -z "$APPLECACHE_IP" ] && ! [ "$DISABLE_APPLE" = "true" ]; then
              APPLECACHE_IP=$LANCACHE_IP
          fi

      Then add this after the ## windows section:

          ## apple
          if ! [ -z "$APPLECACHE_IP" ]; then
              echo "Enabling cache for apple"
              cp /etc/bind/cache/apple/template.db.apple /etc/bind/cache/apple/db.apple
              sed -i -e "s%{{ applecache_ip }}%$APPLECACHE_IP%g" /etc/bind/cache/apple/db.apple
              sed -i -e "s%#ENABLE_APPLE#%%g" /etc/bind/cache.conf
          fi

      Add this at the end of /etc/bind/cache.conf:

          ## APPLE
          zone "swcdn.apple.com" in {
              type master;
              file "/etc/bind/cache/apple/db.apple";
          };

      Add a folder named "apple" in /etc/bind/cache and create two files in it.

      db.apple (the IP is filled in automatically on docker boot; it's the IP of the cache docker):

          $TTL 600
          @ IN SOA ns1 dns.steamcache.net. ( 2015040800 604800 600 600 600 )
          @ IN NS ns1
          ns1 IN A
          @ IN A

      template.db.apple:

          $TTL 600
          @ IN SOA ns1 dns.steamcache.net. ( 2015040800 604800 600 600 600 )
          @ IN NS ns1
          ns1 IN A {{ applecache_ip }}
          @ IN A {{ applecache_ip }}

      To enable the Apple cache I just added "-e APPLECACHE_IP=youripnumber" in the extra parameters (advanced view in docker edit/install). The complete mod also requires mapping folders and copying files with the correct permissions, because the docker resets the files when updating. If you are interested in testing it I can write it all down, but it would be faster if it were implemented in the docker directly. That's my two cents: I just needed Apple update caching on top of Windows updates in a mixed environment and I didn't want to create another docker just for that. I hope this is useful and maybe gets implemented.
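      A quick way to sanity-check the result once the container is up (a sketch; assumes dig is installed and APPLECACHE_IP points at the cache container, which is an assumption about your environment):

```shell
# Ask the container's bind for Apple's CDN hostname; it should answer with
# the cache IP configured above. APPLECACHE_IP is a placeholder; the
# 127.0.0.1 default is only so the command runs standalone.
dig +short +time=2 +tries=1 swcdn.apple.com @"${APPLECACHE_IP:-127.0.0.1}" \
    2>/dev/null || echo "dig not available or no answer"
```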
  23. Hello again, let me explain the situation: I have a share accessed by, I guess, five or six Macs, but one or more of them "lock" files by owning them as another user on the share instead of nobody:users. The result is that the other Macs cannot read/write those files, and I have to reset the permissions manually every time with chown. Every computer connected to that share uses the same Unraid user, and it's AFP only. In that share there is a folder mounted in a VM, but I checked and the VM writes as nobody:users. Also, is there any way to ban certain files or folders with a specified string in their names over AFP? Thanks ~
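      For reference, the manual reset I mentioned is just a recursive chown back to Unraid's default owner. A minimal sketch (the share path here is a made-up demo; substitute the real mount point, e.g. something under /mnt/user):

```shell
# Reset ownership on the share back to nobody:users (Unraid's default).
# SHARE is a hypothetical placeholder, defaulting to a demo directory.
SHARE="${SHARE:-/tmp/share-demo}"
mkdir -p "$SHARE"
touch "$SHARE/locked-file"              # stand-in for a "locked" file
chown -R nobody:users "$SHARE" || echo "chown needs root"
ls -ln "$SHARE"                         # verify the new ownership
```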
  24. Hello, I'm pretty new to KVM and I would like to ask if someone has seen the same error or can help me troubleshoot it. I have two Unraid boxes with the same motherboard (Gigabyte H97N WIFI), both with 8 GB of RAM; the only component that differs between them is the CPU (one has a Pentium G3250 and one an i5 4460). Both have a virtual machine with Debian, but only one of them (the one on the i5) asks at every VM boot to press Ctrl+D to continue or to use the root password for a shell (and yes, I tried systemctl default and the other commands with no success). Both VMs have TeamViewer; the one on the Pentium has only CrashPlan and some cron schedules, while the one on the i5 has Dropbox and cron schedules. IOMMU is disabled on the Pentium and enabled on the i5. I should add that this already happened on the same machine with Xubuntu. That Ctrl+D prompt at VM startup is very annoying because the machine is in a remote location, and if I need to restart the VM or the Unraid box I lose all my access to the machine through TeamViewer. Thanks
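      From what I've read, that Ctrl+D prompt is systemd dropping into emergency mode, most often because a mount listed in /etc/fstab failed at boot; that cause is my assumption, not confirmed. A sketch of what I'd run from the emergency shell in the guest:

```shell
# Show units that failed during boot (often a mount or fsck unit).
systemctl --failed --no-pager 2>/dev/null || echo "systemd not available"
# List the fstab entries; a stale or missing device here is the usual
# culprit. Adding the "nofail" option to the offending line lets the
# boot continue instead of dropping to the emergency shell.
grep -v '^#' /etc/fstab 2>/dev/null || true
```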
  25. No more warnings, thanks! Anyway, is there any way to see what the client is doing? Some info like in the desktop version.