Autchirion

Everything posted by Autchirion

  1. You just edit the template and add these two as environment variables; be sure to adapt the values in <> to your setup. Also, the phone app does not support OAuth, so your users still need a password if they want to use the app.

     SOCIAL_PROVIDERS: allauth.socialaccount.providers.openid_connect
     SOCIALACCOUNT_PROVIDERS: {
       "openid_connect": {
         "SERVERS": [
           {
             "id": "authentik",
             "name": "Authentik",
             "server_url": "https://<auth.domain.tld>/application/o/<appname>/.well-known/openid-configuration",
             "token_auth_method": "client_secret_basic",
             "APP": {
               "client_id": "<your client id>",
               "secret": "<your secret>"
             }
           }
         ]
       }
     }
  2. I just updated to version 11.3.2, which was released 7 h ago, and since then MariaDB is broken. I had to roll back to a backup from last night and go back to mariadb:11.2.3. Is anyone else facing this issue? I wasn't able to connect any more; every time phpMyAdmin, my own scripts, or Nextcloud tried to connect, it resulted in the error message posted in the log.
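     A minimal sketch of the rollback, assuming the container was on a floating tag: pinning the repository to the last known-good release keeps a routine update from pulling 11.3.x again until the issue is sorted out.

        # pull the known-good release and point the container at it,
        # i.e. set the template's Repository field to mariadb:11.2.3
        docker pull mariadb:11.2.3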
  3. I'm starting to get rid of login screens and use Authentik to authenticate at basically every service I'm hosting myself. Please add support for another authentication method so that Authentik or other identity providers can be used to log into Unraid. Examples: the *arrs use basic auth, Jellyseerr supports LDAP, Nextcloud supports OAuth2, so basically everything on my network supports this by now; the only service I still have to log into manually is Unraid. With the future of passkeys and other authentication methods, Unraid should support more ways to log in. HTTP basic auth might well be the best option, since it lets users keep using the login screen and is far less effort than implementing OAuth2 for the root user.
  4. Not really, I deleted my docker.img and recreated it. While doing this I changed all CPU assignments, so right now I don’t need to fix it any more.
  5. It is made to handle bursts, but sometimes these bursts are too big to be handled by the cache. I was hoping to find a solution without throwing money at it (aka a bigger cache, not reusing my first SSD).
  6. Yeah, my problem is that the cache is only 120 GB; if I set it to move only at e.g. 80%, I'd need to run it every 3.4 minutes (assuming a 1 Gbit/s upload rate, which is my home network speed) to make sure it never fills up (rough math below).
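     For reference, a rough check of that figure, assuming a 120 GiB pool, an 80 % threshold (24 GiB left free) and a steady 1 Gbit/s (~119 MiB/s) ingest:

        # time to fill the remaining 20 % of the pool at full line rate
        awk 'BEGIN { free_mib = 24 * 1024; rate_mib_s = 1000 / 8 / 1.048576;
                     printf "%.1f minutes to fill\n", free_mib / rate_mib_s / 60 }'
        # -> about 3.4 minutes between mover runs to stay ahead of the uploads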
  7. yes, I expected this, damn it. 🙂
  8. Somehow it seems to me that moving when the cache is full isn't working. I've got a share where "Move All from Cache-Yes shares when disk is above a certain percentage" is set to yes, and two caches (named Cache and Datacache), where this share uses the Datacache (~120 GB). I'm downloading ~100 files of ~2 GB each; the files are pre-generated (~1 KB) and then filled with data afterwards. -> The mover isn't being activated when the Datacache is 80% full. I'm not sure if I set something up wrong or if this use case isn't covered by Mover Tuning. [EDIT] Since it sounded similar: cache.cfg and datacache.cfg exist. I'm not sure if it is case sensitive, because the filenames are lowercase while the first letter of the cache name in the UI is uppercase (see the check below).
     Mover Tuning - Share settings: (screenshot)
     Mover Tuning Settings: (screenshot)
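     A quick way to compare the stored names with what the GUI shows, assuming the pool .cfg files live under /boot/config/pools/ and the share settings under /boot/config/shares/ (paths worth double-checking on your release):

        # list the pool config filenames exactly as they are stored on the flash drive
        ls /boot/config/pools/
        # check how the pool name is spelled inside the share's own config
        grep -i 'datacache' /boot/config/shares/*.cfg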
  9. Thank you for your reply! Sorry, my native language isn't English, so sometimes I still think I made something clear but obviously didn't. You are right, this happens when the users start the transfer at the same time. Of course this won't work and I should have thought of it, my bad. I would obviously need to increase the Min Free Space to users * max filesize to cover this use case as well. Is there any way to invoke the mover when the min free space is hit? That way I could move all the finished files while the new files are being written, so I don't have to increase the min free space to a massive size. I checked Mover Tuning, but I didn't see an option for this. [edit] Learned that Mover Tuning allows exactly what I want: start moving (as a per-share setting) as soon as the drive is about to be full.
  10. Hey guys, I'm running a Nextcloud server where I sometimes get the issue that the cache runs full. This occurs whenever we upload video files into one folder. Each file is smaller than the cache, but together they are much bigger, and the upload happens before the mover is scheduled. So I expected that as soon as the Minimum Free Space limit is hit, the next file would automatically be written to the array. However, this does not happen, at least not if everything is in one folder or e.g. multiple folders together are bigger than the cache (each user uploads their files into their own folder, but they start uploading before the cache is "full"). This is causing me quite a headache and I'm not sure what to do. So, can't I tell Unraid to just move the files to the array from now on, even if that means splitting a folder across cache and array? In general this is possible: if the mover has already run, the folder is on the array and we can still add more files to it. Alternatively, can't we invoke the mover for this specific cache as soon as free space < min free space (a rough sketch of that idea is below)? Are there any other options to prevent file loss for huge writes to my cache? I mean, the individual files are relatively small; we record in 4K at most, so it's not like a single file can exceed the 120 GB of the cache drive I'm using. Thank you in advance, Autchi
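      A minimal sketch of the "invoke the mover early" idea (not built-in behaviour): a small script that could run every few minutes, e.g. via the User Scripts plugin, and starts the mover once the pool's free space drops below a threshold. The mount point /mnt/cache and the mover path /usr/local/sbin/mover are assumptions to verify on your system:

        #!/bin/bash
        # kick off the mover early when the cache pool gets close to full
        THRESHOLD_GB=20                 # free space that should trigger a move
        POOL_MOUNT=/mnt/cache           # adjust to the pool used by the share
        free_gb=$(df -BG --output=avail "$POOL_MOUNT" | tail -1 | tr -dc '0-9')
        if [ "$free_gb" -lt "$THRESHOLD_GB" ]; then
            /usr/local/sbin/mover start   # verify the exact mover invocation on your release
        fi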
  11. Hey guys, I've been using Unraid for quite a while now, including the CPU pinning feature. Due to a recent change to my containers I wanted to update my CPU pinning. However, as soon as I change the CPU pinning for at least one container and hit "Apply", the UI only shows "Please wait..." with a spinning wheel that never ends. I tried it with the containers running and with them stopped, and I checked the log for error messages while doing so. No error message comes up; it just seems like nothing is happening. Since I'm not a super expert at debugging things like this, I was hoping someone here could help me figure out what I need to do. I'm running an i5-12400. Have a great day, Autchi
  12. Hey guys, I'm posting this here since it feels like an issue with the engine rather than the container. I've got two Nextcloud containers, one is nextcloud_prod, the other is nextcloud_test. This was working fine until recently; I changed some stuff but reverted it, and now it's not working any more. When I try to start the container I get this message:

      The command failed: docker run -d --name='nextcloud_test' --net='swag' --cpuset-cpus='2,4,6,8,10,3,5,7,9,11' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="server" -e HOST_CONTAINERNAME="nextcloud_test" -e 'PHP_MEMORY_LIMIT'='8192M' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://nextcloud.happyheppos.eu' -l net.unraid.docker.icon='https://decatec.de/wp-content/uploads/2017/08/nextcloud_logo.png' -p '3080:80/tcp' -v '/mnt/user/appdata/nextcloud_test/nextcloud':'/var/www/html':'rw' -v '/mnt/user/appdata/nextcloud_test/apps':'/var/www/html/custom_apps':'rw' -v '/mnt/user/appdata/nextcloud_test/config':'/var/www/html/config':'rw' -v '/mnt/user/nextcloud_data_prod':'/var/www/html/data':'rw' --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0 'nextcloud:production' && docker exec -u 0 nextcloud_test /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars' && docker exec -u 0 nextcloud_test /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"
      cab6b69a0570c5ce8417739b595a2a175f08e98ee01fca3a0fe37bceca09c7ad
      The command failed.

      Interestingly, this is basically a copy/paste of the nextcloud_prod container, which works perfectly fine; the only thing I adapted is the container name in the Post Arguments. Any ideas what might be going wrong here? EDIT: I tried removing

      && docker exec -u 0 nextcloud_test /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"

      from the Post Arguments, and all of a sudden it starts just fine; calling the command manually (after the container is started without it) also works like a charm. Thank you in advance, Autchi
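      A sketch of how the same package install could run from a small script (for example via the User Scripts plugin) once the container is up, instead of being chained into the Post Arguments; the container name and package list are taken from the command above:

        #!/bin/bash
        # wait until nextcloud_test reports as running, then install the extras inside it
        until docker inspect -f '{{.State.Running}}' nextcloud_test 2>/dev/null | grep -q true; do
            sleep 5
        done
        docker exec -u 0 nextcloud_test /bin/bash -c \
            "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"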
  13. Could this plugin be extended to cover read activity as well (and possibly be limited to only some disks of the array)? I've got some reads that prevent disks from spinning down, but I have no idea where they are coming from.
  14. Hey guys, something is preventing my disks from spinning down due to reads/writes every 30 seconds (I recorded a video of it and double-checked). The picture shows a read/write session directly after clearing all stats, so everything shown is caused by the 30-second r/w sessions. The picture also shows iotop as a reference, which unfortunately doesn't show anything. What I looked into:
      - No Docker container/VM is active, they are all stopped.
      - The Open Files plugin shows no open files on /mnt/diskX or /mnt/user.
      - iotop doesn't show any process causing it, yet disk writes do happen.
      Interestingly, it does not write the same amount to cache and cache2, even though they run in a mirrored setup. Also, it seems like there are only writes on the ZFS drives, while the XFS drives only see reads. Does anyone have an idea what might be going wrong here? It doesn't seem like a big issue, but I don't like the extra power consumed by drives that never spin down, and I'm a bit worried that something might be wrong. Have a good day, Autchi
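      One thing that might narrow it down, assuming inotify-tools can be installed (e.g. via NerdTools or un-get; it is not part of stock Unraid): watch a single disk for access/modify events and see which paths light up every 30 seconds.

        # recursive watch is heavy on large filesystems, but fine for a short test run
        inotifywait -m -r -e access,modify,create --timefmt '%H:%M:%S' --format '%T %w%f %e' /mnt/disk1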
  15. Thank you, since you confirm my assumption, I'll go with this solution. 🙂
  16. Hello folks, I converted my cache and one drive to ZFS. One of the main reasons was that I want to use ZFS snapshots to reduce downtime of my containers. But here comes Nextcloud: I set my Nextcloud to store data on the cache first and then move it to the disk at night. So I'd need to create a backup of the cache and of the ZFS disk. Let's assume I ran the following: zfs snapshot cache/zfs_test@test && zfs snapshot disk2/zfs_test@test. Can I safely access /mnt/user/zfs_test/.zfs/snapshot/test and get all elements from both snapshots out of there? I'd say yes, but since it's critical data I want to make sure. Also, if someone has a better suggestion on how to handle this type of scenario, let me know! Thank you for your help, Autchi
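     For reference, a minimal sketch of the two snapshots plus the per-pool paths where ZFS itself exposes them; whether the merged /mnt/user view also shows the .zfs directory is exactly the part worth testing with non-critical data first:

        # snapshot both datasets that back the share
        zfs snapshot cache/zfs_test@test && zfs snapshot disk2/zfs_test@test
        # each dataset exposes its snapshot under its real mount point
        ls /mnt/cache/zfs_test/.zfs/snapshot/test
        ls /mnt/disk2/zfs_test/.zfs/snapshot/test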
  17. Same (or similar) issue here, if you are using the app (which I assume because of error 413). I can upload massive files (>20 GB) via Chrome on my PC but not via the app. Do I assume correctly that you are referring to the app? Also, did you try to upload via browser? And finally, don't forget that if you are using Cloudflare with the proxy setting to hide your IP, the max file size is 100 MB; you will have to activate uploading large files as chunks in the app. You can do this in the app settings; there should be an option like "advanced" (sorry, I'm not using the English translation of the app) where you can set the chunk size for files. This issue seems new to me, because I'm quite sure I uploaded massive files from the app before. So what I'm wondering is whether something went wrong in the app with a recent update. Update:
      - I completely deleted my account from my phone
      - logged in again (didn't use an app login, but the normal login)
      - set the chunk size to 90 MB
      - activated the auto upload
      and voilà, I was able to upload my 2 GB video file.
  18. How did you add the token? That part is a bit unclear to me; otherwise it looks relatively simple 🙂
  19. Thanks for the info, I read it again and realized I had misunderstood something! The network is running now. VNC is working again as well; interestingly, I didn't change anything, no idea what kind of hiccup that was.
  20. I just tried it with both Windows 10 and 11; unfortunately neither works, it fails at the network step. With Windows 10 I was able to skip that and then tried to install the driver afterwards via Device Manager. The Ethernet controller is shown; update driver, point it at the CD drive, etc. ... it finds the driver and then hangs forever at "installing driver". When it does eventually finish, this message comes up. Reinstalled several times; machine Q35-7.1 with virtio 0.1.229-1. An additional problem: I can't connect with Unraid's VNC tool (failed to connect to server), while connecting with TightVNC from another PC works fine. Any ideas? I'm stuck; I had Win 10 on it before and it worked perfectly, now I've set everything up from scratch and suddenly it doesn't work any more.
  21. Uh, that's not happy news, since with version 3.4.4 I'm getting an error with Borg Backup. It seems Borg requires the file libffi.so.7, but libffi 3.4.4 no longer ships it. Any known workarounds (except installing it manually via the go file, which I don't like)?

      Traceback (most recent call last):
        File "/usr/lib64/python3.9/site-packages/borg/archiver.py", line 41, in <module>
          from .archive import Archive, ArchiveChecker, ArchiveRecreater, Statistics, is_special
        File "/usr/lib64/python3.9/site-packages/borg/archive.py", line 20, in <module>
          from . import xattr
        File "/usr/lib64/python3.9/site-packages/borg/xattr.py", line 9, in <module>
          from ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno
        File "/usr/lib64/python3.9/ctypes/__init__.py", line 8, in <module>
          from _ctypes import Union, Structure, Array
      ImportError: libffi.so.7: cannot open shared object file: No such file or directory
  22. Hello all, I need libffi to be able to run Borg Backup. For this I did these two steps; after running the last command I was able to run Borg Backup.

      # Download libffi from slackware.uk to /boot/extra:
      wget -P /boot/extra/ https://slackware.uk/slackware/slackware64-15.0/slackware64/l/libffi-3.3-x86_64-3.txz
      # Install the package:
      installpkg /boot/extra/libffi-3.3-x86_64-3.txz

      As I understand it, a package stored in /boot/extra/ should be installed automatically at boot. However, after a reboot it isn't. My other packages are being installed, just not this one. Any clues why that is? Thank you in advance, Autchi
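      As a stop-gap while the /boot/extra behaviour is unclear, the same install could also be repeated at boot from a script, e.g. a User Scripts job set to run at array start (this assumes the User Scripts plugin; the package path matches the one downloaded above):

        #!/bin/bash
        # re-install the libffi 3.3 package (provides libffi.so.7) if it is missing
        if [ ! -e /usr/lib64/libffi.so.7 ]; then
            installpkg /boot/extra/libffi-3.3-x86_64-3.txz
        fi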
  23. I already tried the "VPN tunneled access for Docker" option, same behavior. The other options all require "Peer allowed IPs:", and I don't know exactly what is supposed to go in there. I deleted wg-quick.log and then activated the tunnel set for Docker (by the way, I had to reboot the server first; before that, wg-quick.log wasn't being created):

      server:~# cat /var/log/wg-quick.log
      wg-quick up wg0 (autostart)
      [#] ip link add wg0 type wireguard
      [#] wg setconf wg0 /dev/fd/63
      [#] ip -4 address add 192.168.2.11 dev wg0
      [#] ip link set mtu 1420 up dev wg0
      [#] wg set wg0 fwmark 51820
      [#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
      [#] ip -4 rule add not fwmark 51820 table 51820
      [#] ip -4 rule add table main suppress_prefixlength 0
      [#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
      [#] iptables-restore -n
      [#] logger -t wireguard 'Tunnel WireGuard-wg0 started'
      [#] ip -4 route flush table 200
      [#] ip -4 route add default via 192.168.2.11 dev wg0 table 200
      [#] ip -4 route add 192.168.1.0/24 via 192.168.1.1 dev table 200
      Error: either "to" is duplicate, or "200" is a garbage.
      [#] iptables-restore -n
      [#] ip -4 rule delete table 51820
      [#] ip -4 rule delete table main suppress_prefixlength 0
      [#] ip link delete dev wg0
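      For what it's worth, the error near the end of that log looks like a route command that lost its interface name: in "... via 192.168.1.1 dev table 200", ip parses "table" as the device and then can't make sense of the leftover "200". A well-formed version needs an interface after dev, along the lines of the command below (br0 is just a placeholder for whatever interface the script meant to fill in there):

        ip -4 route add 192.168.1.0/24 via 192.168.1.1 dev br0 table 200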
  24. Hello guys, I just got my second server and set it up. I want to use my old server as an off-site backup solution in case of a disaster, and my new server will be at home. Both servers are running Unraid 6.11.1. I've got WireGuard running on my router to handle incoming VPN connections to my home. For all my other devices I'm using a similar config structure; however, if I import this config and select "VPN tunneled access for System" it doesn't seem to work. Workflow (what I do on the server => server response):
      - Import the config (see below) => the config shows up in the interface
      - Select "VPN tunneled access for System", click Apply => the config is stored
      - Reboot (just in case)
      - Set the switch from inactive to active => the switch jumps right back to inactive, and the syslog shows: "Oct 19 13:28:14 servername wireguard: Tunnel WireGuard-wg0 started" (no more output)
      - Check on my router => there was an initial handshake, but the connection got closed immediately after that

      Config:
      [Interface]
      PrivateKey = OffsitePrivateKey
      Address = 192.168.2.11/32
      DNS = 8.8.8.8

      [Peer]
      PublicKey = HomePublicKey
      AllowedIPs = 0.0.0.0/0
      Endpoint = domain:port
      PersistentKeepalive = 25

      I don't know what I'm doing wrong here, since this works for all the other devices I use. If anyone can point me in the right direction I would be grateful. Thank you in advance, Autchi