Autchirion

Members
  • Posts: 95
  • Joined
  • Last visited

Posts posted by Autchirion

  1. On 3/31/2024 at 2:50 PM, diehardbattery said:

    If you mean the Docker Safe New Perms then I did try that, unfortunately it didn't help.

    Actually, this might even make things worse! Docker Safe New Perms resets the user:group to nobody:nobody and the permissions to 777. If I do that to my data, Nextcloud won't start!

  2. Hey Guys,

     

    I've got a share /mnt/user/nextclouddata where I store all my Nextcloud data. It is not in appdata, since I want appdata to live exclusively on the cache to speed it up without the /mnt/user overhead.

    Nextcloud requires the data folder to be owned by 33:33 with permissions 770. However, every now and then (I'm not sure when) it gets reset to 777 and nobody:users. Is there any way to prevent this? Every time it happens, Nextcloud warns me about it and simply stops working.
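
    For reference, getting the share back into the state Nextcloud expects boils down to something like this (a minimal sketch; the path is the share from above):

    # Restore the ownership and permissions Nextcloud expects on the data share
    chown -R 33:33 /mnt/user/nextclouddata
    chmod -R 770 /mnt/user/nextclouddata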

     

    Thank you in advance,

    Autchi

  3. Hey Guys,

     

    I'm getting an error "chmod(): Operation not permitted at /var/www/html/lib/private/Log/File.php#86" in my log lately.

    There are two issues I'm observing:

    1. Attachments to e-mails sometimes aren't being uploaded.
    2. When the container gets an update, I have to edit the template and click save so that all commands are rerun completely.

    Any ideas what's going on?

     

    Extra Parameters:

    --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0

    Post Arguments:

    && docker exec -u 0 nextcloud_prod /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars' && docker exec -u 0 nextcloud_prod /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript" && docker network connect redis nextcloud_prod && docker network connect mariadb nextcloud_prod
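
    For reference, a quick way to check who owns the file the failing chmod() most likely targets (a hedged sketch; I'm assuming the default log location under the data directory):

    # Inspect ownership of the Nextcloud log that lib/private/Log/File.php writes to
    docker exec nextcloud_prod ls -ln /var/www/html/data/nextcloud.log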

     

  4. On 1/24/2024 at 1:01 AM, priv-leon said:

    I'm looking at some SSO integration, but everything needs to be configured in a .env file. Is this located somewhere, or is it in the docker image itself?

     

    You just edit the template and add these two as environment variables; be aware that you need to adjust the values in <> to your needs. Also, the phone app does not support OAuth, so your users still need a password if they want to use the app.

     

    SOCIAL_PROVIDERS: allauth.socialaccount.providers.openid_connect
    
    SOCIALACCOUNT_PROVIDERS: { "openid_connect": { "SERVERS": [ { "id": "authentik", "name": "Authentik", "server_url": "https://<auth.domain.tld>/application/o/<appname>/.well-known/openid-configuration", "token_auth_method": "client_secret_basic", "APP": { "client_id": "<your client id>", "secret": "<your secret>" } } ] } }
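
    In plain docker run terms, the two template entries are just environment variables, roughly like this (only a sketch, with the JSON value shortened and the <> placeholders still to be filled in):

    docker run ... \
      -e SOCIAL_PROVIDERS='allauth.socialaccount.providers.openid_connect' \
      -e SOCIALACCOUNT_PROVIDERS='{"openid_connect": {"SERVERS": [{"id": "authentik", ...}]}}' \
      ...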

     

  5. I just updated to version 11.3.2, which was released 7 h ago, and since then MariaDB is broken. I had to roll back to a backup from last night and go back to mariadb:11.2.3.

     

    Anyone else facing this issue? I wasn't able to connect any more; every time phpMyAdmin, my own scripts, or Nextcloud tried to connect, it resulted in an error message in the log.

  6. I'm starting to get rid of login screens and to use authentik to authenticate at basically every service I host myself.

    Add support for any other authentication method in order to be able to use authentik or other identity providers to log into Unraid.

     

    Examples:

    the *arrs support basic auth

    Jellyseerr supports LDAP

    Nextcloud supports OAuth2

     

    So basically everything on my network supports this by now; the only service I still have to log in to manually is Unraid. With passkeys and other authentication methods on the horizon, Unraid should support more ways to log in. HTTP Basic Auth might be the best option, since it lets users keep using the login screen and is way less effort than implementing OAuth2 for the root user.

  7. It is made to handle bursts, but sometimes these bursts are too big to be handled by the cache. I was hoping to find a solution without throwing money at it (i.e. a bigger cache instead of reusing my first SSD).

  8. Yeah, my problem is that the cache is only 120 GB. If I set it to move only at e.g. 80 %, I'd need to run the mover every ~3.4 minutes (assuming a 1 Gbit/s upload rate, which is my home network speed) to make sure it never fills up.
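
    The back-of-the-envelope math behind that number, with my values from above:

    # 120 GB cache with an 80 % mover threshold  ->  ~24 GB of headroom left
    # 1 Gbit/s upload  ~=  0.125 GB/s sustained ingest
    # 24 GB / 0.125 GB/s  =  192 s, i.e. roughly 3-3.5 minutes until the cache is full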

  9. Somehow it seems to me that moving when the cache is full isn't working.

    • I've got a share with "Move All from Cache-Yes shares when disk is above a certain percentage" set to yes.
    • Two caches (named Cache and Datacache); this share uses the Datacache (~120 GB).
    • I'm downloading ~100 files of ~2 GB each; the files are pre-generated (~1 KB) and then filled with data afterwards. -> The mover isn't being activated when the Datacache is 80 % full.

    I'm not sure if I set up something wrong or if this use case isn't covered by Mover Tuning.

     

    [EDIT] Since it sounded similar: cache.cfg and datacache.cfg exist. I'm not sure whether the check is case sensitive, because the filenames are lowercase while the first letter of the cache name in the UI is uppercase.
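
    For reference, this is how I looked at the pool config filenames (hedged: I'm assuming the per-pool configs live under /boot/config/pools):

    # List the pool config files the plugin might be matching against
    ls -l /boot/config/pools/
    # -> cache.cfg  datacache.cfg  (lowercase, unlike the capitalized names in the UI)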

     

    Mover Tuning - Share settings:

    [screenshot]

     

    Mover Tuning Settings:

    [screenshot]

  10. Thank you for your reply!

    Sorry, my native language isn't English, so sometimes I think I made something clear when I obviously didn't. You are right, this happens when the users start the transfer at the same time. Of course this won't work and I should have thought of it, my bad. I will obviously need to increase the Min Free Space to users × max file size to cover this use case as well.

     

    Is there any way to invoke the mover when the min free space is hit? This way I could move all the finished files while the new files are written, so I don't have to increase the min free space to a massive size. I checked mover tuning, but I didn't see an option for this.

     

    [edit] I learned that Mover Tuning allows exactly what I want: start moving (per share setting) as soon as the drive is about to be full.

  11. Hey Guys,

     

    I'm running a Nextcloud server where I sometimes run into the issue that the cache fills up. This occurs whenever we upload video files into one folder. Each file is smaller than the cache, but together they are way bigger, and the upload happens before the mover is scheduled.

    So I expected that as soon as the Minimum Free Space limit is hit, the next file would automatically be written to the array. However, this does not happen, at least not if everything is in one folder, or if multiple folders together are bigger than the cache (each user uploads into his own folder, but they all start uploading before the cache is "full"). This is causing me quite a headache, and I'm not sure what to do.

     

    So, can't I tell Unraid to simply write the files to the array from now on, even if it means splitting folders across cache and array? In general this is possible: if the mover has already run, the folder is on the array and we are just adding more files to it.

    Alternatively, can't we invoke the mover for this specific cache as soon as free space < min free space?

    Are there any other options to prevent file loss for huge writes to my cache? The individual files are relatively small (we record in 4K at most), so it's not as if a single file could exceed the 120 GB cache drive I'm using.
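
    What I have in mind for the mover question above is something like this (just a rough sketch, not built-in behaviour; the pool path, the threshold, and the exact mover invocation are assumptions):

    #!/bin/bash
    # Kick off the mover early once the pool drops below a free-space threshold
    POOL=/mnt/cache                       # assumed pool mount point
    MIN_FREE_KB=$((20 * 1024 * 1024))     # assumed threshold: 20 GB in KiB
    free_kb=$(df --output=avail "$POOL" | tail -n 1)
    if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
        /usr/local/sbin/mover start       # mover invocation may differ between Unraid releases
    fi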

     

    Thank you in advance,

    Autchi

  12. Hey Guys,

     

    I've been using Unraid for quite a while now, including the CPU pinning feature. Due to a recent change in my containers I wanted to update my CPU pinning. However, as soon as I change the CPU pinning for at least one container and hit "Apply", the UI only shows "Please wait..." with a spinning wheel that never ends.

    I tried it with the containers running and stopped, and I checked the log for error messages while doing so. No error message comes up; it just seems like it's doing nothing.

     

    Since I'm not an expert in debugging issues like this, I was hoping someone here could help me figure out what I need to do. I'm running an i5-12400.

     

    Have a great day

    Autchi

  13. Hey Guys,

     

    I'm posting this here since it feels like an issue with the engine rather than the container. I've got two Nextcloud containers, nextcloud_prod and nextcloud_test. This was working fine until recently; I changed some things, but reverted them, and now it's not working any more.

     

    When I try to start the container, I get the message "The command failed":

    docker run
      -d
      --name='nextcloud_test'
      --net='swag'
      --cpuset-cpus='2,4,6,8,10,3,5,7,9,11'
      -e TZ="Europe/Berlin"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="server"
      -e HOST_CONTAINERNAME="nextcloud_test"
      -e 'PHP_MEMORY_LIMIT'='8192M'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://nextcloud.happyheppos.eu'
      -l net.unraid.docker.icon='https://decatec.de/wp-content/uploads/2017/08/nextcloud_logo.png'
      -p '3080:80/tcp'
      -v '/mnt/user/appdata/nextcloud_test/nextcloud':'/var/www/html':'rw'
      -v '/mnt/user/appdata/nextcloud_test/apps':'/var/www/html/custom_apps':'rw'
      -v '/mnt/user/appdata/nextcloud_test/config':'/var/www/html/config':'rw'
      -v '/mnt/user/nextcloud_data_prod':'/var/www/html/data':'rw'
      --user 99:100
      --sysctl net.ipv4.ip_unprivileged_port_start=0 'nextcloud:production' && docker exec
      -u 0 nextcloud_test /bin/sh
      -c 'echo "umask 000" >> /etc/apache2/envvars' && docker exec
      -u 0 nextcloud_test /bin/bash
      -c "apt update && apt install
      -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript"
    cab6b69a0570c5ce8417739b595a2a175f08e98ee01fca3a0fe37bceca09c7ad
    
    The command failed.

     

    Interestingly, this is basically just a copy/paste of the nextcloud_prod container, which is working perfectly fine; the only thing I adapted is the container name in the post arguments. Any ideas what might be going wrong here?

     

    EDIT: I tried removing && docker exec -u 0 nextcloud_test /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra ffmpeg imagemagick ghostscript" from the post arguments, and all of a sudden it's starting just fine. Running the command manually (after the container is started without it) also works like a charm.

     

    Thank you in advance,

    Autchi

  14. Hey Guys,

     

    Something is preventing my disks from spinning down due to reads/writes every 30 seconds (I recorded a video of it and double-checked).

     

    The picture shows a read/write session directly after clearing all stats, so everything shown is due only to the 30-second r/w sessions. The picture also shows iotop as a reference, which unfortunately doesn't show anything.

    [screenshot]

     

    What I looked into:

    - No Docker container/VM is active; they are all stopped.
    - I used the Open Files plugin and it shows no open files on /mnt/diskX or /mnt/user.
    - iotop doesn't show any process using the disks, yet it shows disk writes.

     

    Interestingly, it does not write the same amount to cache and cache2, even though they are running in a mirrored setup. Also, it seems like there are only writes on the ZFS drives, while the XFS drives only show reads.

     

    Does anyone have an idea what might be going wrong here? It doesn't seem like a big issue, but I don't like the extra power consumed by drives that never spin down, and I'm a bit worried that something might be wrong.
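
    A sketch of what I could run next to catch the periodic writer (hedged: inotify-tools is not part of stock Unraid and would need to be installed first, and the disk path is just an example):

    # Watch one data disk recursively for any file activity
    inotifywait -m -r -e modify,create,delete,access /mnt/disk1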

     

    Have a good day

    Autchi

  15. Hello Folks,

     

    I converted my cache and one drive to ZFS. One of the main reasons was that I want to use ZFS snapshots to reduce the downtime of my containers. But then there's Nextcloud: I set Nextcloud up to store data on the cache first and move it to the disk at night. So I'd need to create a backup of both the cache and the ZFS disk.

     

    Let's assume I ran the following commands:

    zfs snapshot cache/zfs_test@test && zfs snapshot disk2/zfs_test@test

     

    Can I then safely access /mnt/user/zfs_test/.zfs/snapshot/test and read all elements from both snapshots out of it? I'd say yes, but since it's critical data I want to make sure.
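
    In other words, this is the access pattern I have in mind (a sketch; I'm reading from the per-pool mount points here, since that's where the hidden .zfs directories definitely appear):

    # Snapshot both datasets that back the /mnt/user/zfs_test share ...
    zfs snapshot cache/zfs_test@test
    zfs snapshot disk2/zfs_test@test
    # ... then read the frozen state from the hidden .zfs snapshot directories
    ls /mnt/cache/zfs_test/.zfs/snapshot/test
    ls /mnt/disk2/zfs_test/.zfs/snapshot/test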

     

    Also, if someone has a better suggestion on how to handle this type of scenario, let me know!

     

    Thank you for your help,

    Autchi

  16. On 2/22/2023 at 6:35 AM, SentouProject said:

    413: The file is too large.

     

    I didn't have this problem on the linuxserver.io version of Nextcloud but moving to this one, I have been struggling to sync or upload any file over 100MB. 

     

    I have managed to change the "Upload max size:" by editing both the /nextcloud/html/.htaccess and /nextcloud/html/.user.ini file with

    php_value upload_max_filesize 16G

    php_value post_max_size 16G

    php_value memory_limit 8G

    php_value max_input_time 3600

    php_value max_execution_time 3600

    Still can't upload any file over 100 MB though. I'm literally pasting the snippets I find all over my Nextcloud docker folder via trial and error to see what does what.

     

    I understand that I need to edit the Apache file as per docs.nextcloud.com, but I can't find it. Or was that the Apache file? Everything else seems to be working fine. The idea is really to get off Google Drive.

     

    Same (or similar) issue here if you are using the app (which I assume because of error 413). I can upload massive files (>20 GB) via the Chrome browser on my PC, but not via the app.

    Do I assume correctly that you are referring to the app? Also, did you try to upload via browser? And finally, don't forget: if you are using Cloudflare with the proxy setting to hide your IP, the maximum file size is 100 MB, so you will have to activate uploading large files as chunks in the app. You can do this in the app settings; there should be an option like "advanced" (sorry, I'm not using the English translation of the app) where you can set the chunk size for uploads.

     

    This issue seems new to me, because I'm quite sure I uploaded massive files via the app before. So what I'm wondering is whether something went wrong in the app since a recent update.

     

    Update:

    - I completely deleted my account from my phone
    - logged in again (didn't use an app login, but the normal login)
    - set the chunk size to 90 MB
    - activated auto upload, and voilà, I was able to upload my 2 GB video file

  17. On 1/23/2023 at 9:50 PM, alturismo said:

    As described here several times: uninstall the driver and then (without rebooting) install it directly from the ISO, that should help.

     

    VNC, have you tried a different browser? Plugins in the browser?

     

    Thanks for the info, I read it again and realized I had misunderstood something! The network is running now.

     

    VNC is running again as well; interestingly, I didn't change anything, no idea what kind of hiccup that was.

  18. I just tried it with Windows 10 and 11; unfortunately neither works, it fails at the network setup.

    With Windows 10 I was able to skip that step and then wanted to install the driver afterwards via the Device Manager. The Ethernet controller is shown; update driver, CD drive etc. ... it finds the driver and then hangs forever at "installing driver". When it eventually finishes, this message appears:

     

    [screenshot of the error message]

     

     

    Reinstalled several times, machine type Q35-7.1 with virtio 0.1.229-1.

    An additional problem: with Unraid's VNC tool I can't get in (failed to connect to server), but connecting with TightVNC from another PC works just fine.

     

    Any ideas? I'm stuck. I had Win 10 on it before and it worked perfectly; now I've set everything up again and suddenly it no longer works.

  19. Uh, that's not happy news, since with version 3.4.4 I'm getting an error with Borg Backup.

     

    It seems like Borg Backup requires the file libffi.so.7, but libffi 3.4.4 doesn't ship this file any more. Any known workarounds (other than installing it manually via the go file, which I don't like)?

    Traceback (most recent call last):
      File "/usr/lib64/python3.9/site-packages/borg/archiver.py", line 41, in <module>
        from .archive import Archive, ArchiveChecker, ArchiveRecreater, Statistics, is_special
      File "/usr/lib64/python3.9/site-packages/borg/archive.py", line 20, in <module>
        from . import xattr
      File "/usr/lib64/python3.9/site-packages/borg/xattr.py", line 9, in <module>
        from ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno
      File "/usr/lib64/python3.9/ctypes/__init__.py", line 8, in <module>
        from _ctypes import Union, Structure, Array
    ImportError: libffi.so.7: cannot open shared object file: No such file or directory

     

  20. Hello all,

     

    I need libffi to be able to run Borg Backup. For this I did these two steps; after running the last command I was able to run Borg Backup.

    # Download libffi from slackware.uk to /boot/extra:
    wget -P /boot/extra/ https://slackware.uk/slackware/slackware64-15.0/slackware64/l/libffi-3.3-x86_64-3.txz
    
    # Install the package:
    installpkg /boot/extra/libffi-3.3-x86_64-3.txz

    As I understand it, if a package is stored in /boot/extra/ it will be installed automatically at boot. However, after a reboot this one isn't auto-installed. My other packages are being installed, just not this one.

    Any clues why that is?
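
    For completeness, the go-file workaround (which I'd rather avoid) would look roughly like this, assuming the .txz is already on the flash drive:

    # Append the install to the go file so it runs on every boot
    echo "installpkg /boot/extra/libffi-3.3-x86_64-3.txz" >> /boot/config/go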

     

    Thank you in advance

    Autchi
