Autchirion

Members
  • Posts: 87
  • Joined
  • Last visited


Autchirion's Achievements: Rookie (2/14) · Reputation: 8
  1. You just edit the template and add these two environment variables; be sure to change the values in <> to match your setup. Also, the phone app does not support OAuth, so your users still need a password if they want to use the app.

```yaml
SOCIAL_PROVIDERS: allauth.socialaccount.providers.openid_connect
SOCIALACCOUNT_PROVIDERS: >
  {
    "openid_connect": {
      "SERVERS": [
        {
          "id": "authentik",
          "name": "Authentik",
          "server_url": "https://<auth.domain.tld>/application/o/<appname>/.well-known/openid-configuration",
          "token_auth_method": "client_secret_basic",
          "APP": {
            "client_id": "<your client id>",
            "secret": "<your secret>"
          }
        }
      ]
    }
  }
```
  2. I just updated to version 11.3.2, which was released 7 hours ago, and since then MariaDB is broken. I had to roll back to a backup from last night and go back to mariadb:11.2.3. Is anyone else facing this issue? I wasn't able to connect any more; every time phpMyAdmin, my own scripts, or Nextcloud tried to connect, it resulted in the error message posted in the log.
  3. I'm starting to get rid of login screens and am using Authentik to authenticate at basically every service I host myself. Please add support for other authentication methods so that Authentik or other identity providers can be used to log into Unraid. Examples: the *arrs use basic auth, Jellyseerr supports LDAP, and Nextcloud supports OAuth2. Basically everything on my network supports this by now, so the only service I still have to log into manually is Unraid. With the future of passkeys and other authentication methods, Unraid should support more ways to log in. HTTP Basic Auth might be the best option, since it lets users keep using the login screen and is far less effort than implementing OAuth2 for the root user.
  4. Not really, I deleted my docker.img and recreated it. While doing this I changed all CPU assignments, so right now I don’t need to fix it any more.
  5. It is made to handle bursts, but sometimes these bursts are too big to be handled by the cache. I was hoping to find a solution without throwing money at it (i.e. a bigger cache instead of reusing my first SSD).
  6. Yeah, my problem is that the cache is only 120GB. If I set it to move at e.g. 80%, I'd need to run the mover every 3.4 minutes (assuming a 1Gbit/s upload rate, which is my home network speed) to make sure it never fills up.
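The 3.4-minute figure can be sanity-checked with a quick script. This is only a sketch; it assumes the 120GB pool is measured in binary GiB and the 1Gbit/s link in decimal bits:

```shell
# With an 80% mover threshold, 20% of the pool is the headroom that
# must absorb incoming data between mover runs.
CACHE_GIB=120
THRESHOLD_PCT=80
RATE_GBIT=1

minutes=$(awk -v c="$CACHE_GIB" -v p="$THRESHOLD_PCT" -v r="$RATE_GBIT" 'BEGIN {
    headroom_bits = c * (100 - p) / 100 * 1024^3 * 8   # space left above the threshold
    printf "%.1f", headroom_bits / (r * 1e9) / 60      # seconds -> minutes
}')
echo "$minutes minutes until the cache overflows"       # -> 3.4 minutes
```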
  7. yes, I expected this, damn it. 🙂
  8. Somehow it seems to me that moving when the cache is full isn't working. My setup:
     - A share with "Move All from Cache-Yes shares when disk is above a certain percentage" set to yes.
     - Two cache pools (named Cache and Datacache); this share uses Datacache (~120GB).
     - I'm downloading ~100 files of ~2GB each; the files are pre-generated (~1KB) and then filled with data.
     -> The mover isn't being activated when Datacache is 80% full. I'm not sure if I set something up wrong or if this use case isn't covered by Mover Tuning. [EDIT] Since it sounded similar: cache.cfg and datacache.cfg exist. I'm not sure whether this is case sensitive, because the filename is lowercase but the first letter of the cache name in the UI is uppercase. (Screenshots attached: Mover Tuning share settings, Mover Tuning settings.)
  9. Thank you for your reply! Sorry, my native language isn't English, so sometimes I think I've made something clear when I obviously haven't. You are right, this happens when the users start the transfer at the same time. Of course this won't work and I should have thought of it, my bad. I will need to increase the Min Free Space to users * max filesize to cover this use case as well. Is there any way to invoke the mover when the min free space limit is hit? That way I could move all the finished files while the new files are being written, so I don't have to increase the min free space to a massive size. I checked Mover Tuning, but I didn't see an option for this. [edit] I learned that Mover Tuning allows exactly what I want: start moving (per share setting) as soon as the drive is about to be full.
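The sizing rule above (Min Free Space = users * max filesize) comes down to one multiplication; the user count and file size below are made-up example values, not from the post:

```shell
# Worst case: every user starts a max-size upload at the same moment,
# so the pool needs headroom for one largest file per concurrent writer.
USERS=5          # assumed number of simultaneous uploaders
MAX_FILE_GB=2    # assumed largest single file, in GB

MIN_FREE_GB=$((USERS * MAX_FILE_GB))
echo "Minimum free space: ${MIN_FREE_GB} GB"   # -> 10 GB
```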
  10. Hey guys, I'm running a Nextcloud server where I sometimes run into the issue that the cache fills up. This occurs whenever we upload video files into one folder: each file is smaller than the cache, but together they are much bigger, and the upload happens before the mover is scheduled.

  I expected that as soon as the Minimum Free Space limit is hit, the next file would automatically be written to the array. However, this does not happen, at least not if everything is in one folder, or if multiple folders together are bigger than the cache (each user uploads into his own folder, but they all start uploading before the cache is "full"). This is causing me quite a headache and I'm not sure what to do.

  So, can't I tell Unraid to just write the files to the array from now on, even if that means splitting a folder across cache and array? In general this is possible: once the mover has run, the folder is on the array, and we can still add more files to it. Alternatively, can't the mover be invoked for this specific cache as soon as free space < min free space? Are there any other options to prevent file loss on huge writes to my cache? The individual files are relatively small (we record in 4K at most), so it's not as if a single file could exceed the 120GB cache drive I'm using. Thank you in advance, Autchi
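A low-space mover trigger like the one asked about above could be scripted outside the mover schedule, e.g. from a one-minute cron job. This is only a sketch under assumptions: the pool mountpoint (/mnt/datacache) and the mover CLI (/usr/local/sbin/mover start) are taken from a stock Unraid layout and should be verified on your system:

```shell
# Kick the mover when the pool's free space falls below a floor.
POOL=/mnt/datacache                 # assumed pool mountpoint
MIN_FREE_KB=$((20 * 1024 * 1024))   # 20 GiB floor, in KiB

should_move() {   # should_move <free_kb> <min_free_kb> -> yes/no
    if [ "$1" -lt "$2" ]; then echo yes; else echo no; fi
}

# GNU df's --output=avail prints the free space of the pool in KiB.
free_kb=$(df --output=avail "$POOL" 2>/dev/null | tail -1)
if [ -n "$free_kb" ] && [ "$(should_move "$free_kb" "$MIN_FREE_KB")" = yes ]; then
    /usr/local/sbin/mover start     # assumed CLI; a no-op if already running
fi
```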
  11. Hey guys, I've been using Unraid for quite a while now, including the CPU pinning feature. Due to a recent change to my containers I wanted to update my CPU pinning. However, as soon as I change the CPU pinning for at least one container and hit "Apply", the UI only shows "Please wait..." with a spinning wheel that never ends. I tried it with the containers running and with them stopped, and I checked the log for error messages while doing so. No error message comes up; it just seems to do nothing. Since I'm no expert in debugging things like this, I was hoping someone here could help me figure out what to do. I'm running an i5-12400. Have a great day, Autchi
  12. Hey guys, I'm posting this here since it feels like an issue of the engine rather than the container. I've got two Nextcloud containers, nextcloud_prod and nextcloud_test. This was working fine until recently; I changed some things but reverted them, and now it's not working any more. When I try to start the container I get the message "The command failed":

```shell
docker run -d --name='nextcloud_test' --net='swag' --cpuset-cpus='2,4,6,8,10,3,5,7,9,11' \
  -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="server" \
  -e HOST_CONTAINERNAME="nextcloud_test" -e 'PHP_MEMORY_LIMIT'='8192M' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.webui='http://nextcloud.happyheppos.eu' \
  -l net.unraid.docker.icon='https://decatec.de/wp-content/uploads/2017/08/nextcloud_logo.png' \
  -p '3080:80/tcp' \
  -v '/mnt/user/appdata/nextcloud_test/nextcloud':'/var/www/html':'rw' \
  -v '/mnt/user/appdata/nextcloud_test/apps':'/var/www/html/custom_apps':'rw' \
  -v '/mnt/user/appdata/nextcloud_test/config':'/var/www/html/config':'rw' \
  -v '/mnt/user/nextcloud_data_prod':'/var/www/html/data':'rw' \
  --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0 'nextcloud:production' \
  && docker exec -u 0 nextcloud_test /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars' \
  && docker exec -u 0 nextcloud_test /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"
```

cab6b69a0570c5ce8417739b595a2a175f08e98ee01fca3a0fe37bceca09c7ad
The command failed.

Interestingly, this is basically a copy/paste of the nextcloud_prod container, which works perfectly fine; the only thing I adapted is the container name in the post arguments. Any ideas what might be going wrong here?

EDIT: I tried removing the `&& docker exec -u 0 nextcloud_test /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"` from the post arguments, and all of a sudden it starts just fine. Calling that command manually (after the container is started without it) also works like a charm. Thank you in advance, Autchi
  13. Can this plugin be extended to track read activity as well (and possibly be limited to only some disks of the array)? I've got some reads that prevent disks from spinning down, but I have no idea where they are coming from.
  14. Hey guys, something is preventing my disks from spinning down: there are reads/writes every 30 seconds (I recorded a video of it and double-checked). The picture shows a read/write session directly after clearing all stats, so everything shown is due to the 30-second r/w sessions; it also shows iotop as a reference, which unfortunately doesn't show anything. What I have looked into:
     - No Docker container or VM is active; they are all stopped.
     - The Open Files plugin shows no open files on /mnt/diskX or /mnt/user.
     - iotop doesn't show any process causing it, yet it does show disk writes.
  Interestingly, the writes to cache and cache2 are not identical, even though they run in a mirrored setup. Also, it seems there are only writes on the ZFS drives, while the XFS drives only have reads. Does anyone have an idea what might be going wrong here? It doesn't seem like a big issue, but I don't like the extra power consumed by the drives not spinning down, and I'm a bit worried that something might be wrong. Have a good day, Autchi
  15. Thank you, since you confirm my assumption, I'll go with this solution. 🙂