
Kevek79
Members · 124 posts

Everything posted by Kevek79

  1. Good evening everybody. About 20 hours have passed and a new evening has started in which I can work on resolving this issue. Regarding Krusader: the recycle bin was my first thought too (even though I always try to delete directly and not use it at all), so I checked, and it held only one file a couple of KB in size. Before starting to delete the docker image I rechecked the container sizes and compared them to yesterday's values. The only thing that has grown since yesterday is the Collabora container. As I have never deleted my complete docker image before, I'm a bit nervous about what could go wrong. Is the procedure I described above correct? Is there anything that could go wrong when deleting and recreating the docker image? (In theory I know everything should be fine, but I'm still a little nervous.) Should I back up the current docker image before deleting it, and if so, does the filesystem of the backup target drive make any difference (e.g. does it need to be XFS or BTRFS)? Would deleting a single docker container (including the then-orphaned container image) free up space in the docker image? If so, could deleting a single container (e.g. Collabora) and then recreating it from scratch reset the container size to where it should be? As Collabora looks like the main driver of the docker image growth, I would rather try to solve the issue with this one container than wipe the complete docker image. Does this make sense? Edit: I just realized that the Collabora container is the only one (besides the duckdns docker) that has no '/config' mapping to the array (the template simply contains no drive mappings). Is this expected behaviour? Maybe I missed something in the configuration and that's the reason Collabora is exploding!?
  2. No problem, been there, done that. I would not tinker with any other settings without knowing what they do.
  3. Is your upload issue solved by changing the client_max_body_size value?
  4. I can try that tomorrow, thanks @BRiT. I was just editing my last post to add the question whether recreating the docker image might help in finding the root cause when you sent your reply. Just to make sure I understand the procedure:
     1. Shut down all running docker containers.
     2. Stop the docker service.
     3. Delete the docker image. (Do I need to delete it via CLI, or can this also be done via SMB?)
     4. Restart the docker service, which should create a new docker image file.
     5. Add all my dockers back using the user templates, without any modifications.
     All my containers should then be back where they were (function-wise) before deleting the docker image, right? Then I'll start taking snapshots of the docker container sizes as suggested above. Edit: I just realized that deleting the docker image can also be done in the GUI.
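For anyone following along, the steps above map onto the CLI roughly like this. It is only a sketch: the docker.img path below is the Unraid default, so verify yours under Settings > Docker before deleting anything (the destructive commands are left as comments).

```shell
# default location of the image file on recent Unraid installs (check yours!)
img=/mnt/user/system/docker/docker.img

# CLI equivalent of the GUI procedure, shown as comments:
# /etc/rc.d/rc.docker stop      # stop the docker service (containers stop too)
# rm "$img"                     # docker.img is an ordinary file, so deleting
#                               # it over SMB works just as well as the CLI
# /etc/rc.d/rc.docker start     # starting the service recreates an empty image
# afterwards re-add the containers via Apps > Previous Apps; the user
# templates live on the flash drive and survive the image deletion
```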
  5. Thanks @saarg for the tip about Docker Hub. When I compare the container sizes in my list above with the latest build tags listed on the respective Docker Hub pages, none of my top six containers in the list should be bigger than 500 MB. I checked the templates of the affected containers, and everything that should point out of the containers to the array seems to be configured that way. What makes me especially curious is what the reason might be for the TeamSpeak container being 1 GB bigger than expected, as there is nothing to misconfigure in that template (no paths to be linked to the array, for example). I also checked the log file of the Collabora container via the GUI; there are a lot of entries (mostly white, some yellow when I open a file via Nextcloud), but honestly speaking I do not understand what most of them mean. The container itself works as expected. What could be my next steps to resolve this? One more question on interpreting the container size listing above: what does "Writable" mean in that table, and do you know why it could be 3.3 GB in the case of the Collabora container?
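On the "Writable" question: that column is the container's copy-on-write top layer, i.e. everything the container has written to paths that are not volume-mapped, and it all lives inside docker.img. A sketch of the standard Docker CLI commands for breaking those numbers down on the host (left as comments, since they need a running Docker daemon):

```shell
inspect="docker ps -s"          # SIZE column = per-container writable layer;
                                # "virtual" adds the shared read-only image layers
breakdown="docker system df -v" # detailed view: images vs containers vs volumes
# a container with no volume mappings (like this Collabora template) writes
# logs, caches and temp files straight into its writable layer inside
# docker.img, which is one plausible reason for a figure like 3.3 GB
```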
  6. Are you running Nextcloud behind a reverse proxy via letsencrypt? If so, I would guess this is because of a limit in one of the nginx config files. Your best bet is /appdata/letsencrypt/nginx/proxy.conf, but a couple of other config files set the same parameter. The parameter to look for is 'client_max_body_size', which is most probably set to 10m. You can either set it to 0 to disable the limit or set a bigger value.
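A minimal sketch of that edit, run here against a throwaway copy so nothing real is touched; on Unraid the file is typically /mnt/user/appdata/letsencrypt/nginx/proxy.conf, and the same directive may appear in other conf files as well:

```shell
conf=$(mktemp)                               # stand-in for proxy.conf
echo "client_max_body_size 10m;" > "$conf"   # the default 10 MB limit
# 0 disables the limit entirely; a concrete size like 512m raises it instead
sed -i 's/client_max_body_size .*/client_max_body_size 0;/' "$conf"
cat "$conf"                                  # -> client_max_body_size 0;
# then reload nginx inside the container, e.g.:
# docker exec letsencrypt nginx -s reload
```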
  7. Hi, I am trying to pinpoint which of my docker containers is misbehaving, as my docker image seems to grow constantly. From reading through the forum I understand that for most users a 20 GB docker image is plenty. My docker image is now at 71% utilisation and I got a warning on the dashboard. As I do not consider myself special in this regard, I guess something is not working as expected. I checked the container sizes, and this was the output I got: What is the typical size of a container within the docker image? Collabora and the binhex containers all use far more space than the others. Especially the Collabora container seems unreasonable, as it is nearly 10 times the size of, for example, the Plex container. I only use Collabora within Nextcloud (both set up with the help of @SpaceInvaderOne's YouTube video) to be able to write a document in the browser, but the occasions we really use it are very rare. Has anyone had a similar issue? Can anyone give me a hint on the root cause of the docker image growing? What would be the best approach to correct this issue?
  8. That's what I thought. But you already got your answer, thanks to @itimpi.
  9. Is anyone successfully using the Mail app within Nextcloud? Are there any dependencies that need to be installed additionally? I can't get the app to show any emails in my inbox (folders are shown instantly, but no content). Has anyone encountered similar issues? I am running v16.0.2. Edit: I tried to connect to another (much smaller) IMAP inbox and it worked, somewhat. I tried to view some emails, but no pictures are shown, even when I hit the respective button to show them. When trying to download an attachment I got an error message (displayed on a page that looked like my login screen). At that point I looked into the Nextcloud log and noticed a bunch of errors recorded while I was testing the Mail app; most of them are related to "Horde_Imap_Client_Exception_ServerResponse". Would it help to upload any diagnostics? Am I missing something completely here (dependencies, DB configuration, PHP config, etc.)? The Contacts and Calendar apps are working like a breeze, and I thought I'd give the Mail app a try to round out the services, but so far this does not look good.
  10. Is it possible that those shares are configured to be on the cache drive? Do you have a single cache drive or a cache pool ?
  11. Thanks @Squid I will install in the evening and see if everything works as expected.
  12. Hi folks, I just received my first LSI HBA (9207-8i). As far as my research goes, this should work out of the box with Unraid. Currently all my data disks, parity and cache drives are connected to the onboard SATA controller. The plan is to leave the cache drives on the SATA controller and move the data and parity disks to the LSI controller. How does Unraid identify a drive? Is this in any way dependent on the controller it is connected to? Is there anything I have to take care of to make switching the drives from one controller to the other work, or is it as simple as installing the card, connecting the drives, booting up and starting the array again?
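For what it's worth, Unraid assigns array slots by drive identity (model plus serial number), not by the controller or port, so the move should be transparent. A small sketch of where that identity shows up on any Linux box (the ls is left commented since the output only makes sense on the server itself):

```shell
byid=/dev/disk/by-id
# ls -l "$byid"   # every symlink name embeds vendor, model and serial, and it
#                 # stays identical whether the disk hangs off onboard SATA or
#                 # the LSI HBA -- which is what the array config keys on
```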
  13. I used Krusader (docker container) to get into appdata/nextcloud and deleted all .DS_Store files I could find. Also look in the subfolders.
  14. My btrfs filesystem df now reports this:
     Data, RAID1: total=410.00GiB, used=393.46GiB
     System, RAID1: total=32.00MiB, used=96.00KiB
     Metadata, RAID1: total=1.00GiB, used=405.33MiB
     GlobalReserve, single: total=116.19MiB, used=0.00B
     So everything is in RAID1 now, thanks! It only shows 410 GiB for Data, but the pool is 1 TB in size. Is this just a quirk in displaying the total size (the dashboard shows the expected 1 TB), or should I be worried?
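The 410 GiB is nothing to worry about: in btrfs filesystem df, "total" is the space already allocated to chunks, not the device capacity. btrfs allocates chunks on demand, so that number grows toward the 1 TB the dashboard shows as the pool fills. A quick check with the data line from above:

```shell
allocated=410   # GiB currently allocated to data chunks (total=410.00GiB)
used=393        # GiB actually used inside those chunks (393.46GiB, rounded)
free_in_chunks=$((allocated - used))
echo "headroom inside already-allocated data chunks: ${free_in_chunks} GiB"
```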
  15. Thanks a lot @saarg - I'm now on the current versions of Nextcloud and the respective docker container, and everything is running great. Now I need to learn what's new in Nextcloud 16.
  16. I managed to update to version 15.0.9 and everything is running fine (still with container version 16.0.1-ls22). Only one warning about missing indices and one about the bigint conversion remain, both of which should be resolved on the command line (instructions are given in the warning). Just one question: as the two required commands ('occ db:add-missing-indices' and 'occ db:convert-filecache-bigint') are database related, am I right to assume they have to be run in the mariaDB container? Or do they need to be run in the Nextcloud docker? I'd like to have all lights green before finally updating to version 16.
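On the mariaDB-vs-Nextcloud question: occ is Nextcloud's own admin tool, so both commands run in the Nextcloud container and reach MariaDB through Nextcloud's configured database connection. A sketch assuming the linuxserver.io defaults, which are only assumptions here: container name "nextcloud", web root /config/www/nextcloud, PHP user "abc" (check your template before running anything).

```shell
# full invocation from the Unraid host, built once and reused:
occ="docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ"
# $occ db:add-missing-indices       # adds the missing database indices
# $occ db:convert-filecache-bigint  # converts filecache columns to bigint
```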
  17. I also ran into the PHP issue described above and reverted the Nextcloud docker back to 16.0.1-ls22, and boom, Nextcloud is up and running again. The Mac desktop client reconnected and is in sync. As the current version of Nextcloud is 16, I think I should upgrade my system; currently I am running version 14.0.3. From what I understand, I should be able to upgrade to 15.0.8 from within the web GUI (at least that is the zip file in the stable channel). But when I start the updater, it stops at the first step because of the unexpected files .DS_Store and ._.DS_Store. I understand that such files are normally produced by macOS, but I have no idea how to proceed from here. Any hints? Edit: does the file check performed by the updater inspect the contents of the appdata/nextcloud folder? I shared the appdata folder via SMB while configuring some dockers and therefore left some of these .DS_Store files in the appdata/nextcloud folder as well. If I remove them via Krusader, should that let the updater's check pass? SOLVED: I deleted the .DS_Store files in the appdata/nextcloud folder and ran the GUI updater; the Nextcloud instance is now running version 15.0.9.1 - yippee!
  18. Thanks @johnnie.black Will do and report result when I am back home in front of my server
  19. Hello, I'm pulling up this old thread because I have a similar issue and would like to ask for some help in determining whether everything is configured and behaving as it should. I extended my 500 GB M.2 cache drive with a second one (also 500 GB) to get some redundancy in the cache. Everything looked OK from my perspective after installing the second drive: the automatic balancing started and a RAID1 was created. As the cache was getting full and I had a spare 1 TB Samsung EVO SATA SSD lying around, I installed it in my server and added it as a third drive in the cache pool. After Unraid did its balancing magic, everything looked OK on the dashboard and I had the expected total cache pool size of 1 TB. But here comes the question: when inspecting the "btrfs filesystem df" field in the GUI, it reports RAID1 for Data but only shows around 500 GB in size. Furthermore, the metadata seems to be in single mode. Looking at the pictures from 'ksignorini', the metadata is also in RAID1 there, which is what I would expect to be correct. Do I need to run a manual balance to ensure data and metadata protection, or should I leave it as is? Thanks in advance for any input from someone who knows more about cache drives than me.
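Regarding the ~500 GB Data figure and the single-mode metadata: btrfs RAID1 keeps two copies of every chunk, so usable capacity is at most half the raw total, and a convert balance is the usual way to pull leftover single-mode metadata into RAID1. A quick sanity check with the drive sizes from the post; the balance command is shown as a comment, and /mnt/cache is Unraid's usual cache mount point:

```shell
drives="500 500 1000"                 # two 500 GB NVMe + the 1 TB SATA EVO, in GB
raw=0
for d in $drives; do raw=$((raw + d)); done
usable=$((raw / 2))                   # raid1 stores everything twice
echo "raw=${raw} GB, raid1 usable=${usable} GB"
# to convert any remaining single-mode metadata to raid1:
# btrfs balance start -mconvert=raid1 /mnt/cache
```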
  20. I have a question about the System Temp plugin. I am running version 6.7.0 on an ASUSTeK COMPUTER INC. ROG STRIX Z370-G mainboard. When I try to detect available drivers I only get "coretemp", which comes directly from the CPU if I am correct, but I get nothing from the motherboard itself, e.g. motherboard temperature, the temperature of my drive cage, or fan speeds. Can anyone give me a hint or a link to a resource to solve this?
  21. Hi, I moved to Unraid about six months ago and everything is running well so far; I'm still learning a bit of Linux every day. I am running Nextcloud 14.0.3 with mariaDB. I would like to upgrade to version 15 now and have already read the page CHBMB linked one post up, but being new to Linux and Unraid I have a question before I start my next command-line safari. Am I right that the first command (docker exec) is only necessary if I start from the host machine's command line? Can I use the "Console" option from the web interface to perform the same task, and if so, do I still need the docker exec command from the description? I would guess not. Does the docker container need to be configured for bash (the "Console shell command" setting in the edit-docker screen of the web interface) to be in the correct environment if I use the "Console" option? Any hints would be appreciated. Thanks in advance, Toby
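A sketch of the difference, assuming the container is named "nextcloud": from the host shell you need docker exec to get inside the container first, while the web UI's Console button already opens a shell inside the container (it runs the equivalent of docker exec for you), so there the prefix is not needed. The "Console shell command" setting only chooses which shell (bash or sh) that button starts.

```shell
# from the Unraid host shell (commented; needs the running container):
# docker exec -it nextcloud bash    # now you are inside the container
enter="docker exec -it nextcloud bash"
# from the GUI Console you start inside the container already, so the update
# commands from the guide can be run directly, without the docker exec prefix
```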